
a product of MVTec

HALCON/HDevelop
Operator Reference (en)

HALCON 24.11 Progress-Steady


HALCON/HDevelop 24.11.1.0
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means,
electronic, mechanical, photocopying, recording, or otherwise, without prior written permission of the publisher.

Copyright © 1996-2024 by MVTec Software GmbH, Munich, Germany


AMD and AMD Athlon™ are either trademarks or registered trademarks of Advanced Micro Devices, Inc.
OpenCL™ and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.
Arm® is a registered trademark of Arm Limited.
OpenGL® and the oval logo are either trademarks or registered trademarks of Hewlett Packard Enterprise in the United States
and/or other countries worldwide.
Intel®, the Intel® logo, OpenVINO™, the OpenVINO™ logo, and Pentium® are either trademarks or registered trademarks of
Intel Corporation or its subsidiaries.
Linux® is a registered trademark of Linus Torvalds.
Microsoft, Windows, Microsoft .NET, Visual C++ and Visual Basic are either trademarks or registered trademarks of Microsoft
Corporation.
CUDA, cuBLAS, and cuDNN are either trademarks or registered trademarks of NVIDIA Corporation.
Sun is a trademark of Oracle Corporation.
Python® is a registered trademark of the PSF.
UNIX® is a registered trademark of The Open Group.
All other nationally and internationally recognized trademarks and tradenames are hereby recognized.
More information about HALCON can be found at: http://www.mvtec.com
Contents

1 1D Measuring 1
close_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
deserialize_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
fuzzy_measure_pairing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
fuzzy_measure_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
fuzzy_measure_pos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
gen_measure_arc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
gen_measure_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
get_measure_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
measure_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
measure_pos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
measure_projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
measure_thresh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
read_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
reset_fuzzy_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
serialize_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
set_fuzzy_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
set_fuzzy_measure_norm_pair . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
translate_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
write_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

2 2D Metrology 31
add_metrology_object_circle_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
add_metrology_object_ellipse_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
add_metrology_object_generic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
add_metrology_object_line_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
add_metrology_object_rectangle2_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
align_metrology_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
apply_metrology_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
clear_metrology_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
clear_metrology_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
copy_metrology_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
create_metrology_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
deserialize_metrology_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
get_metrology_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
get_metrology_object_fuzzy_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
get_metrology_object_indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
get_metrology_object_measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
get_metrology_object_model_contour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
get_metrology_object_num_instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
get_metrology_object_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
get_metrology_object_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
get_metrology_object_result_contour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
read_metrology_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
reset_metrology_object_fuzzy_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
reset_metrology_object_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
serialize_metrology_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
set_metrology_model_image_size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
set_metrology_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
set_metrology_object_fuzzy_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
set_metrology_object_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
write_metrology_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

3 3D Matching 77
3.1 3D Box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
find_box_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.2 3D Gripping Point Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.3 Deep 3D Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
apply_deep_matching_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
get_deep_matching_3d_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
read_deep_matching_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
set_deep_matching_3d_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
write_deep_matching_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.4 Deformable Surface-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
add_deformable_surface_model_reference_point . . . . . . . . . . . . . . . . . . . . . . . . . 95
add_deformable_surface_model_sample . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
clear_deformable_surface_matching_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
clear_deformable_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
create_deformable_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
deserialize_deformable_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
find_deformable_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
get_deformable_surface_matching_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
get_deformable_surface_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
read_deformable_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
refine_deformable_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
serialize_deformable_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
write_deformable_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
3.5 Shape-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
clear_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
create_cam_pose_look_at_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
create_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
deserialize_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
find_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
get_shape_model_3d_contours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
get_shape_model_3d_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
project_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
read_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
serialize_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
trans_pose_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
write_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
3.6 Surface-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
clear_surface_matching_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
clear_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
create_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
deserialize_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
find_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
find_surface_model_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
get_surface_matching_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
get_surface_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
read_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
refine_surface_model_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
refine_surface_model_pose_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
serialize_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
set_surface_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
write_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157

4 3D Object Model 159


4.1 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
clear_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
copy_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
deserialize_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
gen_box_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
gen_cylinder_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
gen_empty_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
gen_object_model_3d_from_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
gen_plane_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
gen_sphere_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
gen_sphere_object_model_3d_center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
read_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
remove_object_model_3d_attrib . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
remove_object_model_3d_attrib_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
serialize_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
set_object_model_3d_attrib . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
set_object_model_3d_attrib_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
union_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
write_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
4.2 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
area_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
distance_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
get_object_model_3d_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
max_diameter_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
moments_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
select_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
smallest_bounding_box_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
smallest_sphere_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
volume_object_model_3d_relative_to_plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
4.3 Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
fit_primitives_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
reduce_object_model_3d_by_view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
segment_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
select_points_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
4.4 Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
affine_trans_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
connection_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
convex_hull_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
edges_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
fuse_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
intersect_plane_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
object_model_3d_to_xyz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
prepare_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
project_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
projective_trans_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
register_object_model_3d_global . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
register_object_model_3d_pair . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
render_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
rigid_trans_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
sample_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
simplify_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
smooth_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
surface_normals_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
triangulate_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
xyz_to_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247

5 3D Reconstruction 249
5.1 Binocular Stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
binocular_disparity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
binocular_disparity_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
binocular_disparity_ms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
binocular_distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
binocular_distance_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
binocular_distance_ms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
disparity_image_to_xyz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
disparity_to_distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
disparity_to_point_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
distance_to_disparity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
essential_to_fundamental_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
gen_binocular_proj_rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
gen_binocular_rectification_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
intersect_lines_of_sight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
match_essential_matrix_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
match_fundamental_matrix_distortion_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
match_fundamental_matrix_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
match_rel_pose_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
reconst3d_from_fundamental_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
rel_pose_to_fundamental_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
vector_to_essential_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
vector_to_fundamental_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
vector_to_fundamental_matrix_distortion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
vector_to_rel_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
5.2 Depth From Focus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
depth_from_focus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
select_grayvalues_from_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
5.3 Multi-View Stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
clear_stereo_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
create_stereo_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
get_stereo_model_image_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
get_stereo_model_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
get_stereo_model_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
get_stereo_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
reconstruct_points_stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
reconstruct_surface_stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
set_stereo_model_image_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
set_stereo_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
5.4 Photometric Stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
estimate_al_am . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
estimate_sl_al_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
estimate_sl_al_zc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
estimate_tilt_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
estimate_tilt_zc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
photometric_stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
reconstruct_height_field_from_gradient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
sfs_mod_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
sfs_orig_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
sfs_pentland . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
shade_height_field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
uncalibrated_photometric_stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
5.5 Sheet of Light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
apply_sheet_of_light_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
calibrate_sheet_of_light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
clear_sheet_of_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
create_sheet_of_light_calib_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
create_sheet_of_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
deserialize_sheet_of_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
get_sheet_of_light_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
get_sheet_of_light_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
get_sheet_of_light_result_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
measure_profile_sheet_of_light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
query_sheet_of_light_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
read_sheet_of_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
reset_sheet_of_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
serialize_sheet_of_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
set_profile_sheet_of_light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
set_sheet_of_light_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
write_sheet_of_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
5.6 Structured Light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369

6 Calibration 371
6.1 Binocular . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
binocular_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
6.2 Calibration Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
caltab_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
create_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
disp_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
find_calib_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
find_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
find_marks_and_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
gen_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
sim_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
6.3 Camera Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
cam_mat_to_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
cam_par_to_cam_mat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
deserialize_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
read_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
serialize_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
write_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
6.4 Hand-Eye . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
calibrate_hand_eye . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
get_calib_data_observ_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
hand_eye_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
set_calib_data_observ_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
6.5 Inverse Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
get_line_of_sight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
6.6 Monocular . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
camera_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
6.7 Multi-View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
calibrate_cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
clear_calib_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
clear_camera_setup_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
create_calib_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
create_camera_setup_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
deserialize_calib_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
deserialize_camera_setup_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
get_calib_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
get_calib_data_observ_contours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
get_calib_data_observ_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
get_camera_setup_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
query_calib_data_observ_indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
read_calib_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
read_camera_setup_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
remove_calib_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
remove_calib_data_observ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
serialize_calib_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
serialize_camera_setup_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
set_calib_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
set_calib_data_calib_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
set_calib_data_cam_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
set_calib_data_observ_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
set_camera_setup_cam_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
set_camera_setup_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
write_calib_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
write_camera_setup_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
6.8 Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
cam_par_pose_to_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
project_3d_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
project_hom_point_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
project_point_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
6.9 Rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
change_radial_distortion_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
change_radial_distortion_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
change_radial_distortion_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
change_radial_distortion_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
contour_to_world_plane_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
gen_image_to_world_plane_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
gen_radial_distortion_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
image_points_to_world_plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
image_to_world_plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
6.10 Self-Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
radial_distortion_self_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
radiometric_self_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
stationary_camera_self_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491

7 Classification 497
7.1 Gaussian Mixture Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
add_class_train_data_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
add_sample_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
classify_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
clear_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
clear_samples_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
create_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
deserialize_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
evaluate_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
get_class_train_data_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
get_params_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
get_prep_info_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
get_sample_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
get_sample_num_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
read_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
read_samples_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
select_feature_set_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
serialize_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
train_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
write_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
write_samples_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
7.2 K-Nearest Neighbors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
add_class_train_data_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
add_sample_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
classify_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
clear_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
create_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
deserialize_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
get_class_train_data_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
get_params_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
get_sample_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
get_sample_num_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
read_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
select_feature_set_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
serialize_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
set_params_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
train_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
write_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
7.3 Look-Up Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
clear_class_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
create_class_lut_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
create_class_lut_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
create_class_lut_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
create_class_lut_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
7.4 Misc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
add_sample_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
clear_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
create_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
deserialize_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
get_sample_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
get_sample_num_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
read_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
select_sub_feature_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
serialize_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
set_feature_lengths_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
write_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
7.5 Neural Nets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
add_class_train_data_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
add_sample_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
classify_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
clear_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
clear_samples_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
create_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
deserialize_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
evaluate_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
get_class_train_data_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
get_params_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
get_prep_info_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
get_regularization_params_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
get_rejection_params_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
get_sample_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
get_sample_num_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
read_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
read_samples_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
select_feature_set_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567
serialize_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569
set_regularization_params_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569
set_rejection_params_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
train_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
write_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
write_samples_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
7.6 Support Vector Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
add_class_train_data_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
add_sample_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
classify_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
clear_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
clear_samples_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
create_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
deserialize_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
evaluate_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
get_class_train_data_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
get_params_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
get_prep_info_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
get_sample_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
get_sample_num_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
get_support_vector_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
get_support_vector_num_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
read_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
read_samples_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
reduce_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
select_feature_set_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
serialize_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
train_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
write_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
write_samples_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603

8 Control 605
assign . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605
assign_at . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 606
break . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 606
case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
catch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
comment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
continue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609
convert_tuple_to_vector_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609
convert_vector_to_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
default . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
elseif . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
endfor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
endif . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
endswitch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
endtry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
endwhile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
executable_expression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
exit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
export_def . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614
for . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
global . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
if . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
import . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
par_join . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
repeat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 620
return . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 620
stop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 620
switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
throw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
try . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
until . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
while . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625

9 Deep Learning 627


get_dl_device_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
optimize_dl_model_for_inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 640
query_available_dl_devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 641
set_dl_device_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 643
9.1 Anomaly Detection and Global Context Anomaly Detection . . . . . . . . . . . . . . . . . . . . 643
train_dl_model_anomaly_dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
9.2 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
fit_dl_out_of_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 656
9.3 Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
create_dl_layer_activation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
create_dl_layer_batch_normalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 659
create_dl_layer_class_id_conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 662
create_dl_layer_concat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665
create_dl_layer_convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 666
create_dl_layer_dense . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
create_dl_layer_depth_max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672
create_dl_layer_depth_to_space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 673
create_dl_layer_dropout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 675
create_dl_layer_elementwise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676
create_dl_layer_identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
create_dl_layer_input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 679
create_dl_layer_loss_cross_entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 682
create_dl_layer_loss_ctc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 684
create_dl_layer_loss_distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 688
create_dl_layer_loss_focal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 690
create_dl_layer_loss_huber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 692
create_dl_layer_lrn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694
create_dl_layer_matmul . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695
create_dl_layer_permutation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 697
create_dl_layer_pooling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 698
create_dl_layer_reduce . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
create_dl_layer_reshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 703
create_dl_layer_softmax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
create_dl_layer_transposed_convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 706
create_dl_layer_zoom_factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
create_dl_layer_zoom_size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 711
create_dl_layer_zoom_to_layer_size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 713
create_dl_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 714
get_dl_layer_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
get_dl_model_layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
get_dl_model_layer_activations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 717
get_dl_model_layer_gradients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
get_dl_model_layer_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
get_dl_model_layer_weights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
load_dl_model_weights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 721
set_dl_model_layer_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 722
set_dl_model_layer_weights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 723
9.4 Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 724
add_dl_pruning_batch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735
apply_dl_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 736
clear_dl_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 738
create_dl_pruning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 739
deserialize_dl_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 740
gen_dl_model_heatmap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 740
gen_dl_pruned_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 742
get_dl_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743
get_dl_pruning_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 768
read_dl_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 769
serialize_dl_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 776
set_dl_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 777
set_dl_pruning_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 781
train_dl_model_batch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 781
write_dl_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785
9.5 Multi-Label Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785
9.6 Object Detection and Instance Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 789
create_dl_model_detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
9.7 Semantic Segmentation and Edge Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 801

10 Develop 809
dev_clear_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 809
dev_clear_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 809
dev_close_inspect_ctrl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 810
dev_close_tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
dev_close_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
dev_disp_text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 812
dev_display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 814
dev_error_var . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 815
dev_get_exception_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 816
dev_get_preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 817
dev_get_system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 818
dev_get_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 818
dev_inspect_ctrl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 819
dev_open_dialog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 820
dev_open_file_dialog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 820
dev_open_tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821
dev_open_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 825
dev_set_check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 828
dev_set_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
dev_set_colored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 831
dev_set_contour_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 832
dev_set_draw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 833
dev_set_line_width . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 833
dev_set_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 834
dev_set_paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 835
dev_set_part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
dev_set_preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
dev_set_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 837
dev_set_system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 838
dev_set_tool_geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
dev_set_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 840
dev_set_window_extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841
dev_show_tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 842
dev_update_pc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 843
dev_update_time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 843
dev_update_var . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 844
dev_update_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 845

11 File 847
11.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 847
close_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 847
fnew_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 847
fread_bytes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 848
fread_char . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 849
fread_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 850
fread_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 851
fwrite_bytes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 852
fwrite_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 853
open_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 854
11.2 Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 856
deserialize_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 856
image_to_memory_block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 856
memory_block_to_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 858
read_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 858
read_image_metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 860
read_sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 861
serialize_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 863
write_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 864
write_image_metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 867
11.3 Misc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 868
copy_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 868
delete_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 868
file_exists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 868
get_current_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 869
list_files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 869
make_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 870
read_world_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 871
remove_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 871
set_current_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 872
11.4 Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 872
deserialize_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 872
read_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 873
serialize_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 873
write_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 874
11.5 Region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 875
deserialize_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 875
read_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 875
serialize_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 876
write_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 877
11.6 Tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 878
deserialize_handle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 878
deserialize_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 878
read_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 879
serialize_handle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 879
serialize_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 880
tuple_is_serializable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 881
tuple_is_serializable_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 881
write_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 882
11.7 XLD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 883
deserialize_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 883
read_contour_xld_arc_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 883
read_contour_xld_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 884
read_polygon_xld_arc_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886
read_polygon_xld_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 887
serialize_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 888
write_contour_xld_arc_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 889
write_contour_xld_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 889
write_polygon_xld_arc_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 892
write_polygon_xld_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 893

12 Filters 895
12.1 Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
abs_diff_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
abs_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 898
acos_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 899
add_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 900
asin_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 901
atan2_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 902
atan_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 902
cos_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 903
div_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 904
exp_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 905
gamma_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 905
invert_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 907
log_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 908
max_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 908
min_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 909
mult_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 910
pow_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 912
scale_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 912
sin_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 914
sqrt_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 914
sub_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 915
tan_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 917
12.2 Bit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 917
bit_and . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 917
bit_lshift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 918
bit_mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 919
bit_not . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 920
bit_or . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 921
bit_rshift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 921
bit_slice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 922
bit_xor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 923
12.3 Color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 924
apply_color_trans_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 924
cfa_to_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 925
clear_color_trans_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 927
create_color_trans_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 927
gen_principal_comp_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 928
linear_trans_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 929
principal_comp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 930
rgb1_to_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 931
rgb3_to_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 932
trans_from_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 933
trans_to_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 939
12.4 Edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 946
close_edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 946
close_edges_length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
derivate_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 948
diff_of_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 951
edges_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 953
edges_color_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 955
edges_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 957
edges_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 960
frei_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 963
frei_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 964
highpass_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 965
info_edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 967
kirsch_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 968
kirsch_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 969
laplace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 970
laplace_of_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 972
prewitt_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 973
prewitt_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 974
roberts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 976
robinson_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 977
robinson_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 978
sobel_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 979
sobel_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 981
12.5 Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 983
coherence_enhancing_diff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 983
emphasize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 985
equ_histo_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 986
equ_histo_image_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 987
illuminate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 989
mean_curvature_flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 990
scale_image_max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 992
shock_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 993
12.6 FFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 994
convol_fft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 994
convol_gabor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 995
correlation_fft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 996
deserialize_fft_optimization_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 997
energy_gabor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 998
fft_generic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 999
fft_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1001
fft_image_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1002
gen_bandfilter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1002
gen_bandpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1004
gen_derivative_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1005
gen_filter_mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1006
gen_gabor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1007
gen_gauss_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1009
gen_highpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1011
gen_lowpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1012
gen_mean_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1013
gen_sin_bandpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1014
gen_std_bandpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1016
optimize_fft_speed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1017
optimize_rft_speed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1018
phase_correlation_fft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1019
phase_deg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1020
phase_rad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1021
power_byte . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1022
power_ln . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1022
power_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1023
read_fft_optimization_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1024
rft_generic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1025
serialize_fft_optimization_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1026
write_fft_optimization_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1027
12.7 Geometric Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1028
affine_trans_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1028
affine_trans_image_size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1030
convert_map_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1032
map_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1033
mirror_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1035
polar_trans_image_ext . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1035
polar_trans_image_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1037
projective_trans_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1039
projective_trans_image_size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1041
rotate_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1042
zoom_image_factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1044
zoom_image_size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1045
12.8 Inpainting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1046
harmonic_interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1046
inpainting_aniso . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1047
inpainting_ced . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1050
inpainting_ct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1052
inpainting_mcf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1055
inpainting_texture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1056
12.9 Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1058
bandpass_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1058
lines_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1059
lines_facet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1061
lines_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1063
12.10 Match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1066
exhaustive_match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1066
exhaustive_match_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1067
gen_gauss_pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1069
monotony . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1070
12.11 Misc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1071
convol_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1071
deviation_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1073
expand_domain_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1073
gray_inside . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1075
gray_skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1076
lut_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1077
symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1078
topographic_sketch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1079
12.12 Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1080
add_noise_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1080
add_noise_white . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1081
gauss_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1082
noise_distribution_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1083
sp_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1084
12.13 Optical Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1085
derivate_vector_field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1085
optical_flow_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1086
unwarp_image_vector_field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1094
vector_field_length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1095
12.14 Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1096
corner_response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1096
dots_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1097
points_foerstner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1098
points_harris . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1101
points_harris_binomial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1103
points_lepetit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1104
points_sojka . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1105
12.15 Scene Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1107
scene_flow_calib . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1107
scene_flow_uncalib . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1109
12.16 Smoothing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1114
anisotropic_diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1119
bilateral_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1120
binomial_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1125
eliminate_min_max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1126
eliminate_sp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1128
fill_interlace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1129
gauss_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1130
guided_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1132
info_smooth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1135
isotropic_diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1136
mean_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1137
mean_image_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1139
mean_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1140
mean_sp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1141
median_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1142
median_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1144
median_separate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1145
median_weighted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1147
midrange_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1148
rank_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1149
rank_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1151
rank_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1152
sigma_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1154
smooth_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1155
trimmed_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1157
12.17 Texture Inspection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1158
deviation_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1158
entropy_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1159
texture_laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1160
12.18 Wiener Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1163
gen_psf_defocus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1163
gen_psf_motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1164
simulate_defocus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1166
simulate_motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1166
wiener_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1168
wiener_filter_ni . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1169

13 Graphics 1173
13.1 3D Scene . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1173
add_scene_3d_camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1173
add_scene_3d_instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1174
add_scene_3d_label . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1174
add_scene_3d_light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1176
clear_scene_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1177
create_scene_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1177
display_scene_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1179
get_display_scene_3d_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1180
remove_scene_3d_camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1181
remove_scene_3d_instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1181
remove_scene_3d_label . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
remove_scene_3d_light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
render_scene_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1183
set_scene_3d_camera_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1183
set_scene_3d_instance_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1184
set_scene_3d_instance_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1186
set_scene_3d_label_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1187
set_scene_3d_light_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1189
set_scene_3d_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1189
set_scene_3d_to_world_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1190
13.2 Drawing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1191
drag_region1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1192
drag_region2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1193
drag_region3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1194
draw_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1195
draw_circle_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1196
draw_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1197
draw_ellipse_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1199
draw_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1200
draw_line_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1201
draw_nurbs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1202
draw_nurbs_interp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1204
draw_nurbs_interp_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1206
draw_nurbs_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1208
draw_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1210
draw_point_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211
draw_polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1212
draw_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1213
draw_rectangle1_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1214
draw_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1215
draw_rectangle2_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1216
draw_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1217
draw_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1218
draw_xld_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1220
13.3 LUT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1221
get_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1221
query_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1222
set_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1222
13.4 Mouse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1225
get_mbutton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1225
get_mbutton_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1226
get_mposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1227
get_mposition_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1228
get_mshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1229
query_mshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1230
send_mouse_double_click_event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1230
send_mouse_down_event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1231
send_mouse_drag_event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1232
send_mouse_up_event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1233
set_mshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1234
13.5 Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1234
attach_background_to_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1234
attach_drawing_object_to_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1235
clear_drawing_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1236
create_drawing_object_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1237
create_drawing_object_circle_sector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1238
create_drawing_object_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1240
create_drawing_object_ellipse_sector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1241
create_drawing_object_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1242
create_drawing_object_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1243
create_drawing_object_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1244
create_drawing_object_text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1245
create_drawing_object_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1246
detach_background_from_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1247
detach_drawing_object_from_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1248
get_drawing_object_iconic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1249
get_drawing_object_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1249
get_window_background_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1250
set_content_update_callback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1251
set_drawing_object_callback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1252
set_drawing_object_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1253
set_drawing_object_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1255
13.6 Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1256
disp_arc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1256
disp_arrow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1257
disp_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1259
disp_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1259
disp_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1261
disp_cross . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1261
disp_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1262
disp_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1264
disp_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1265
disp_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1266
disp_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1267
disp_polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1271
disp_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1272
disp_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1274
disp_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1275
disp_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1276
13.7 Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1276
convert_coordinates_image_to_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1276
convert_coordinates_window_to_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1278
get_contour_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1279
get_draw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1279
get_hsi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1280
get_icon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1280
get_line_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1281
get_line_width . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1282
get_paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1282
get_part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1283
get_part_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1283
get_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1284
get_rgba . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1285
get_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1285
get_window_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1286
query_all_colors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1287
query_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1288
query_colored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1288
query_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1289
query_line_width . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1290
query_paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1290
query_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1291
set_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1291
set_colored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1293
set_contour_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1294
set_draw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1295
set_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1295
set_hsi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1296
set_icon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1297
set_line_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1298
set_line_width . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1299
set_paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1300
set_part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1302
set_part_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1303
set_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1304
set_rgba . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1305
set_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1306
set_window_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1307
13.8 Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1309
disp_text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1309
get_font . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1312
get_font_extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1313
get_string_extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1313
get_tposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1314
new_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1315
query_font . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1316
read_char . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1316
read_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1317
set_font . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1318
set_tposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1319
write_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1320
13.9 Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1321
clear_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1321
close_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1322
copy_rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1322
dump_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1324
dump_window_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1326
flush_buffer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1326
get_disp_object_model_3d_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1327
get_os_window_handle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1328
get_window_attr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1330
get_window_extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1330
get_window_pointer3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1331
get_window_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1332
new_extern_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1333
open_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1335
query_window_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1338
set_window_attr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1339
set_window_dc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1340
set_window_extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1340
set_window_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1341
unproject_coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1342
update_window_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1343

14 Identification 1347
14.1 Bar Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1347
clear_bar_code_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1349
create_bar_code_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1349
decode_bar_code_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1351
deserialize_bar_code_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1352
find_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1353
get_bar_code_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1356
get_bar_code_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1358
get_bar_code_param_specific . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1360
get_bar_code_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1361
query_bar_code_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1368
read_bar_code_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1370
serialize_bar_code_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1370
set_bar_code_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1371
set_bar_code_param_specific . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1381
write_bar_code_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1383
14.2 Data Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1384
clear_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1386
create_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1387
deserialize_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1392
find_data_code_2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1392
get_data_code_2d_objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1398
get_data_code_2d_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1401
get_data_code_2d_results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1405
query_data_code_2d_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1422
read_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1424
serialize_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1425
set_data_code_2d_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1425
write_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1435

15 Image 1437
15.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1442
get_grayval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1442
get_grayval_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1443
get_grayval_interpolated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1444
get_image_pointer1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1446
get_image_pointer1_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1447
get_image_pointer3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1448
get_image_size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1449
get_image_time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1450
get_image_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1450
15.2 Acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1451
close_framegrabber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1451
get_framegrabber_callback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1452
get_framegrabber_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1453
get_framegrabber_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1454
grab_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1455
grab_data_async . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1456
grab_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1457
grab_image_async . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1458
grab_image_start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1459
info_framegrabber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1461
open_framegrabber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1463
set_framegrabber_callback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1465
set_framegrabber_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1467
set_framegrabber_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1467
15.3 Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1468
access_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1468
append_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1469
channels_to_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1470
compose2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1470
compose3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1471
compose4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1472
compose5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1473
compose6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1473
compose7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1474
count_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1475
decompose2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1476
decompose3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1477
decompose4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1478
decompose5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1479
decompose6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1479
decompose7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1480
image_to_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1482
15.4 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1482
copy_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1482
gen_image1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1483
gen_image1_extern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1484
gen_image1_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1486
gen_image3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1487
gen_image3_extern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1489
gen_image_const . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1491
gen_image_gray_ramp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1493
gen_image_interleaved . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1494
gen_image_proto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1496
gen_image_surface_first_order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1497
gen_image_surface_second_order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1499
interleave_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1501
region_to_bin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1503
region_to_label . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1504
region_to_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1505
15.5 Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1506
add_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1506
change_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1506
full_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1507
get_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1508
rectangle1_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1508
reduce_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1509
15.6 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1510
area_center_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1510
cooc_feature_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1511
cooc_feature_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1512
elliptic_axis_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1513
entropy_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1514
estimate_noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1515
fit_surface_first_order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1517
fit_surface_second_order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1519
fuzzy_entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1520
fuzzy_perimeter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1521
gen_cooc_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1522
gray_features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1524
gray_histo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1525
gray_histo_abs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1526
gray_histo_range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1527
gray_projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1528
histo_2dim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1529
intensity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1530
min_max_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1531
moments_gray_plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1533
plane_deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1534
select_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1535
shape_histo_all . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1537
shape_histo_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1538
15.7 Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1539
add_image_border . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1539
change_format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1540
crop_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1541
crop_domain_rel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1541
crop_part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1542
crop_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1543
crop_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1544
tile_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1546
tile_images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1547
tile_images_offset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1548
15.8 Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1550
overpaint_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1550
overpaint_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1551
paint_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1552
paint_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1553
paint_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1554
set_grayval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1556
15.9 Type Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1557
complex_to_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1557
convert_image_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1558
real_to_complex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1558
real_to_vector_field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1559
vector_field_to_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1559

16 Inspection 1561
16.1 Bead Inspection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1561
apply_bead_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1561
clear_bead_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1562
create_bead_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1563
get_bead_inspection_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1565
set_bead_inspection_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1566
16.2 OCV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1567
close_ocv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1567
create_ocv_proj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1568
deserialize_ocv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1569
do_ocv_simple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1570
read_ocv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1571
serialize_ocv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1572
traind_ocv_proj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1572
write_ocv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1573
16.3 Structured Light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1574
clear_structured_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1575
create_structured_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1576
decode_structured_light_pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1577
deserialize_structured_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1578
gen_structured_light_pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1579
get_structured_light_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1583
get_structured_light_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1585
read_structured_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1586
reconstruct_surface_structured_light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1587
serialize_structured_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1588
set_structured_light_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1589
write_structured_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1593
16.4 Texture Inspection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1594
add_texture_inspection_model_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1598
apply_texture_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1599
clear_texture_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1600
clear_texture_inspection_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1601
create_texture_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1602
deserialize_texture_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1603
get_texture_inspection_model_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1605
get_texture_inspection_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1605
get_texture_inspection_result_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1607
read_texture_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1608
remove_texture_inspection_model_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1610
serialize_texture_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1611
set_texture_inspection_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1612
train_texture_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1616
write_texture_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1618
16.5 Variation Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1619
clear_train_data_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1619
clear_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1620
compare_ext_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1620
compare_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1622
create_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1623
deserialize_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1624
get_thresh_images_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1625
get_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1626
prepare_direct_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1626
prepare_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1628
read_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1629
serialize_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1630
train_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1631
write_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1632

17 Legacy 1633
17.1 2D Metrology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1633
copy_metrology_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1633
transform_metrology_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1634
17.2 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1635
clear_sampset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1635
close_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1635
create_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1636
descript_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1637
deserialize_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1638
enquire_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1638
enquire_reject_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1639
get_class_box_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1640
learn_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1641
learn_sampset_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1642
read_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1643
read_sampset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1644
serialize_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1645
set_class_box_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1645
test_sampset_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1646
write_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1647
17.3 Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1648
ifelse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1648
17.4 DL Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1648
apply_dl_classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1651
clear_dl_classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1652
clear_dl_classifier_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1653
clear_dl_classifier_train_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1653
deserialize_dl_classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1654
get_dl_classifier_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1655
get_dl_classifier_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1656
get_dl_classifier_train_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1657
read_dl_classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1658
serialize_dl_classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1660
set_dl_classifier_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1661
train_dl_classifier_batch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1663
write_dl_classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1665
17.5 Develop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1666
dev_map_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1666
dev_map_prog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1667
dev_map_var . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1667
dev_unmap_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1667
dev_unmap_prog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1668
dev_unmap_var . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1668
17.6 Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1669
gauss_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1669
polar_trans_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1670
17.7 Graphics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1671
clear_rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1671
disp_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1672
disp_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1673
get_comprise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1674
get_fix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1675
get_fixed_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1675
get_insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1676
get_line_approx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1676
get_lut_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1677
get_pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1678
get_tshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1678
move_rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1679
open_textwindow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1680
query_insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1684
query_tshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1685
set_comprise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1685
set_fix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1686
set_fixed_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1687
set_insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1688
set_line_approx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1688
set_lut_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1689
set_pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1690
set_tshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1691
slide_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1692
write_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1693
17.8 Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1693
add_sample_identifier_preparation_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1696
add_sample_identifier_training_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1698
apply_sample_identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1699
clear_sample_identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1701
create_sample_identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1702
deserialize_sample_identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1704
get_sample_identifier_object_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1705
get_sample_identifier_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1706
prepare_sample_identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1707
read_sample_identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1709
remove_sample_identifier_preparation_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1710
remove_sample_identifier_training_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1711
serialize_sample_identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1712
set_sample_identifier_object_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1712
set_sample_identifier_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1713
train_sample_identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1715
write_sample_identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1716
17.9 Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1717
adapt_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1717
best_match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1718
best_match_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1719
best_match_pre_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1721
best_match_rot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1722
best_match_rot_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1723
clear_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1725
create_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1725
create_template_rot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1727
deserialize_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1729
fast_match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1729
fast_match_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1730
read_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1731
serialize_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1732
set_offset_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1733
set_reference_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1733
write_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1734
17.10 Matching, Component-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1735
clear_all_component_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1735
clear_all_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1735
clear_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1736
clear_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1736
cluster_model_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1737
create_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1738
create_trained_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1741
deserialize_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1743
deserialize_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1744
find_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1745
gen_initial_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1750
get_component_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1752
get_component_model_tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1753
get_component_relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1755
get_found_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1756
get_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1758
inspect_clustered_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1759
modify_component_relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1760
read_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1762
read_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1762
serialize_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1763
serialize_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1763
train_model_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1764
write_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1768
write_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1769
17.11 Morphology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1769
closing_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1769
dilation_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1770
dilation_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1772
erosion_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1773
erosion_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1774
fitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1775
gen_struct_elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1776
golay_elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1777
hit_or_miss_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1780
hit_or_miss_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1781
morph_hat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1782
morph_skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1784
morph_skiz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1785
opening_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1786
opening_seg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1787
thickening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1788
thickening_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1789
thickening_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1790
thinning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1792
thinning_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1793
thinning_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1794
17.12 OCR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1795
close_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1795
create_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1796
create_text_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1799
deserialize_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1799
do_ocr_multi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1800
do_ocr_single . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1801
info_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1801
ocr_change_char . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1802
ocr_get_features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1803
read_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1804
serialize_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1804
testd_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1805
traind_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1806
trainf_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1807
write_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1807
17.13 Regions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1808
get_region_chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1808
hamming_change_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1809
interjacent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1810
17.14 Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1812
bin_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1812
class_ndim_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1812
expand_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1813
learn_ndim_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1815
17.15 Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1816
approx_chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1816
approx_chain_simple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1820
clear_all_bar_code_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1821
clear_all_barriers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1822
clear_all_calib_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1822
clear_all_camera_setup_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1822
clear_all_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1823
clear_all_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1823
clear_all_class_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1824
clear_all_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1824
clear_all_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1825
clear_all_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1825
clear_all_color_trans_luts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1826
clear_all_conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1826
clear_all_data_code_2d_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1826
clear_all_deformable_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1827
clear_all_descriptor_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1827
clear_all_events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1828
clear_all_lexica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1828
clear_all_matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1829
clear_all_metrology_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1829
clear_all_mutexes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1829
clear_all_ncc_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1830
clear_all_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1830
clear_all_ocr_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1831
clear_all_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1831
clear_all_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1832
clear_all_sample_identifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1832
clear_all_scattered_data_interpolators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1832
clear_all_serialized_items . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1833
clear_all_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1833
clear_all_shape_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1834
clear_all_sheet_of_light_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1834
clear_all_stereo_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1835
clear_all_surface_matching_results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1835
clear_all_surface_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1836
clear_all_templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1836
clear_all_text_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1836
clear_all_text_results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1837
clear_all_variation_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1837
close_all_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1838
close_all_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1838
close_all_files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1839
close_all_framegrabbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1839
close_all_measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1839
close_all_ocrs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1840
close_all_ocvs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1840
close_all_serials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1841
close_all_sockets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1841
distance_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1841
filter_kalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1842
intersection_ll . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1846
partition_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1847
read_kalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1849
select_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1851
select_lines_longest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1853
update_kalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1854
17.16 XLD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1856
union_straight_contours_histo_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1856

18 Matching 1859
18.1 Correlation-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1859
clear_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1859
create_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1859
deserialize_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1861
determine_ncc_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1862
find_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1863
find_ncc_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1867
get_ncc_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1871
get_ncc_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1872
get_ncc_model_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1873
read_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1873
serialize_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1874
set_ncc_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1875
set_ncc_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1875
write_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1876
18.2 Deep Counting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1877
apply_deep_counting_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1878
create_deep_counting_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1879
get_deep_counting_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1880
prepare_deep_counting_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1882
read_deep_counting_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1883
set_deep_counting_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1884
write_deep_counting_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1884
18.3 Deformable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1885
clear_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1885
create_local_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1886
create_local_deformable_model_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1888
create_planar_calib_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1890
create_planar_calib_deformable_model_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1893
create_planar_uncalib_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1895
create_planar_uncalib_deformable_model_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . 1899
deserialize_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1902
determine_deformable_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1903
find_local_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1906
find_planar_calib_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1908
find_planar_uncalib_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1910
get_deformable_model_contours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1915
get_deformable_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1916
get_deformable_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1916
read_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1918
serialize_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1918
set_deformable_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1919
set_deformable_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1920
set_local_deformable_model_metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1921
set_planar_calib_deformable_model_metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1922
set_planar_uncalib_deformable_model_metric . . . . . . . . . . . . . . . . . . . . . . . . . . . 1923
write_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1925
18.4 Descriptor-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1925
clear_descriptor_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1925
create_calib_descriptor_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1926
create_uncalib_descriptor_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1928
deserialize_descriptor_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1931
find_calib_descriptor_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1931
find_uncalib_descriptor_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1933
get_descriptor_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1936
get_descriptor_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1936
get_descriptor_model_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1937
get_descriptor_model_results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1938
read_descriptor_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1939
serialize_descriptor_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1940
set_descriptor_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1941
write_descriptor_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1941
18.5 Shape-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1942
adapt_shape_model_high_noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1942
clear_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1943
create_aniso_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1943
create_aniso_shape_model_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1948
create_generic_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1953
create_scaled_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1953
create_scaled_shape_model_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1958
create_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1962
create_shape_model_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1966
deserialize_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1970
determine_shape_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1970
find_aniso_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1973
find_aniso_shape_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1979
find_generic_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1986
find_scaled_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1988
find_scaled_shape_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1993
find_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2000
find_shape_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2005
get_generic_shape_model_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2012
get_generic_shape_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2012
get_generic_shape_model_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2014
get_generic_shape_model_result_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2016
get_shape_model_clutter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2017
get_shape_model_contours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2017
get_shape_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2018
get_shape_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2019
inspect_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2020
read_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2021
serialize_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2022
set_generic_shape_model_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2023
set_generic_shape_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2024
set_shape_model_clutter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2036
set_shape_model_metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2039
set_shape_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2041
set_shape_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2042
train_generic_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2043
write_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2044

19 Matrix 2045
19.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2045
get_diagonal_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2045
get_full_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2046
get_sub_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2047
get_value_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2048
set_diagonal_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2049
set_full_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2052
set_sub_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2053
set_value_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2054
19.2 Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2055
abs_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2055
abs_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2056
add_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2057
add_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2058
div_element_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2059
div_element_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2060
invert_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2061
invert_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2063
mult_element_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2065
mult_element_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2066
mult_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2067
mult_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2069
pow_element_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2071
pow_element_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2072
pow_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2073
pow_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2074
pow_scalar_element_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2076
pow_scalar_element_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2077
scale_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2078
scale_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2079
solve_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2080
sqrt_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2082
sqrt_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2082
sub_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2083
sub_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2084
transpose_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2085
transpose_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2086
19.3 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2087
clear_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2087
copy_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2087
create_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2088
repeat_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2090
19.4 Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2091
decompose_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2091
orthogonal_decompose_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2093
svd_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2097
19.5 Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2099
eigenvalues_general_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2099
eigenvalues_symmetric_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2100
generalized_eigenvalues_general_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2101
generalized_eigenvalues_symmetric_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2103
19.6 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2104
determinant_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2104
get_size_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2105
max_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2106
mean_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2107
min_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2109
norm_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2110
sum_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2111
19.7 File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2113
deserialize_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2113
read_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2113
serialize_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2114
write_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2114

20 Morphology 2117
20.1 Gray Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2117
dual_rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2119
gen_disc_se . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2121
gray_bothat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2122
gray_closing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2123
gray_closing_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2124
gray_closing_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2125
gray_dilation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2126
gray_dilation_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2127
gray_dilation_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2128
gray_erosion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2129
gray_erosion_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2130
gray_erosion_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2131
gray_opening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2132
gray_opening_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2133
gray_opening_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2134
gray_range_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2135
gray_tophat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2136
read_gray_se . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2137
20.2 Region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2138
bottom_hat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2140
boundary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2141
closing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2143
closing_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2144
closing_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2146
dilation1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2147
dilation2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2148
dilation_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2150
dilation_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2151
erosion1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2153
erosion2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2154
erosion_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2155
erosion_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2157
hit_or_miss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2158
minkowski_add1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2159
minkowski_add2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2161
minkowski_sub1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2162
minkowski_sub2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2164
opening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2165
opening_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2166
opening_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2167
pruning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2168
top_hat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2169

21 OCR 2171
21.1 Convolutional Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2171
clear_ocr_class_cnn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2171
deserialize_ocr_class_cnn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2171
do_ocr_multi_class_cnn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2172
do_ocr_single_class_cnn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2173
do_ocr_word_cnn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2174
get_params_ocr_class_cnn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2176
query_params_ocr_class_cnn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2177
read_ocr_class_cnn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2177
serialize_ocr_class_cnn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2178
21.2 Deep OCR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2179
apply_deep_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2183
create_deep_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2185
get_deep_ocr_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2186
read_deep_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2191
set_deep_ocr_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2192
write_deep_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2193
21.3 K-Nearest Neighbors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2193
clear_ocr_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2193
create_ocr_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2194
deserialize_ocr_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2197
do_ocr_multi_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2198
do_ocr_single_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2199
do_ocr_word_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2200
get_features_ocr_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2201
get_params_ocr_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2202
read_ocr_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2203
select_feature_set_trainf_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2204
serialize_ocr_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2205
trainf_ocr_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2206
write_ocr_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2207
21.4 Lexica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2208
clear_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2208
create_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2209
import_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2209
inspect_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2210
lookup_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2210
suggest_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2211
21.5 Neural Nets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2212
clear_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2212
create_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2212
deserialize_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2216
do_ocr_multi_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2217
do_ocr_single_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2217
do_ocr_word_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2218
get_features_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2220
get_params_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2221
get_prep_info_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2222
get_regularization_params_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2224
get_rejection_params_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2224
read_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2225
select_feature_set_trainf_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2226
select_feature_set_trainf_mlp_protected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2228
serialize_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2229
set_regularization_params_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2230
set_rejection_params_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2232
trainf_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2233
trainf_ocr_class_mlp_protected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2234
write_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2236
21.6 Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2236
clear_text_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2236
clear_text_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2237
create_text_model_reader . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2237
find_text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2239
get_text_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2240
get_text_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2241
get_text_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2242
segment_characters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2244
select_characters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2246
set_text_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2249
text_line_orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2254
text_line_slant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2255
21.7 Support Vector Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2257
clear_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2257
create_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2257
deserialize_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2261
do_ocr_multi_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2261
do_ocr_single_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2262
do_ocr_word_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2263
get_features_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2265
get_params_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2265
get_prep_info_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2266
get_support_vector_num_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2268
get_support_vector_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2269
read_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2269
reduce_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2270
select_feature_set_trainf_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2271
select_feature_set_trainf_svm_protected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2273
serialize_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2274
trainf_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2275
trainf_ocr_class_svm_protected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2276
write_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2277
21.8 Training Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2278
append_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2278
concat_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2279
protect_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2280
read_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2281
read_ocr_trainf_names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2282
read_ocr_trainf_names_protected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2282
read_ocr_trainf_select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2283
write_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2284
write_ocr_trainf_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2284

22 Object 2287
22.1 Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2288
compare_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2288
count_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2289
get_channel_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2289
get_obj_class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2290
test_equal_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2291
22.2 Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2292
clear_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2292
concat_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2292
copy_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2293
gen_empty_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2295
insert_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2295
integer_to_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2296
obj_diff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2297
obj_to_integer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2297
remove_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2298
replace_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2299
select_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2300

23 Regions 2303
23.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2303
get_region_contour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2303
get_region_convex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2304
get_region_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2304
get_region_polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2305
get_region_runs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2306
23.2 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2307
gen_checker_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2307
gen_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2308
gen_circle_sector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2310
gen_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2312
gen_ellipse_sector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2313
gen_empty_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2315
gen_grid_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2315
gen_random_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2317
gen_random_regions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2318
gen_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2320
gen_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2321
gen_region_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2323
gen_region_histo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2323
gen_region_hline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2324
gen_region_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2325
gen_region_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2326
gen_region_polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2327
gen_region_polygon_filled . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2328
gen_region_polygon_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2329
gen_region_runs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2330
label_to_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2331
23.3 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2332
area_center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2339
area_holes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2340
circularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2341
compactness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2342
connect_and_holes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2343
contlength . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2344
convexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2345
diameter_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2346
eccentricity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2347
elliptic_axis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2348
euler_number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2349
find_neighbors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2350
get_region_index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2351
get_region_thickness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2352
hamming_distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2352
hamming_distance_norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2353
height_width_ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2355
inner_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2355
inner_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2357
moments_region_2nd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2357
moments_region_2nd_invar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2359
moments_region_2nd_rel_invar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2360
moments_region_3rd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2361
moments_region_3rd_invar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2361
moments_region_central . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2362
moments_region_central_invar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2363
orientation_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2364
rectangularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2365
region_features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2366
roundness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2369
runlength_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2370
runlength_features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2371
select_region_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2372
select_region_spatial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2373
select_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2374
select_shape_proto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2377
select_shape_std . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2379
smallest_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2380
smallest_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2382
smallest_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2383
spatial_relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2384
23.4 Geometric Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2386
affine_trans_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2386
mirror_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2387
move_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2388
polar_trans_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2389
polar_trans_region_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2391
projective_trans_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2393
transpose_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2394
zoom_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2396
23.5 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2396
complement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2396
difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2397
intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2398
symm_difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2399
union1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2400
union2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2401
23.6 Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2401
test_equal_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2401
test_region_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2402
test_region_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2403
test_subset_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2404
23.7 Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2405
background_seg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2405
clip_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2406
clip_region_rel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2407
closest_point_transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2408
connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2410
distance_transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2411
eliminate_runs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2412
expand_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2413
fill_up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2415
fill_up_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2415
junctions_skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2416
merge_regions_line_scan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2417
partition_dynamic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2418
partition_rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2419
rank_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2420
remove_noise_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2421
shape_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2422
skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2423
sort_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2424
split_skeleton_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2426
split_skeleton_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2427

24 Segmentation 2429
24.1 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2429
add_samples_image_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2429
add_samples_image_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2430
add_samples_image_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2431
add_samples_image_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2432
class_2dim_sup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2433
class_2dim_unsup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2435
class_ndim_norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2437
classify_image_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2439
classify_image_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2440
classify_image_class_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2441
classify_image_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2442
classify_image_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2444
learn_ndim_norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2445
24.2 Edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2446
detect_edge_segments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2446
hysteresis_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2448
nonmax_suppression_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2449
nonmax_suppression_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2450
24.3 Maximally Stable Extremal Regions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2451
segment_image_mser . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2451
24.4 Region Growing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2456
expand_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2456
expand_gray_ref . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2458
regiongrowing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2460
regiongrowing_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2461
regiongrowing_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2462
24.5 Threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2467
auto_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2473
binary_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2474
char_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2475
check_difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2477
dual_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2478
dyn_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2480
fast_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2482
histo_to_thresh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2483
local_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2484
threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2486
threshold_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2487
var_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2488
zero_crossing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2493
zero_crossing_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2494
24.6 Topography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2495
critical_points_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2495
local_max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2496
local_max_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2497
local_min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2498
local_min_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2500
lowlands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2501
lowlands_center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2502
plateaus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2503
plateaus_center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2504
pouring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2505
saddle_points_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2507
watersheds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2508
watersheds_marker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2509
watersheds_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2511

25 System 2513
25.1 Compute Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2513
activate_compute_device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2513
deactivate_all_compute_devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2514
deactivate_compute_device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2514
get_compute_device_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2515
get_compute_device_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2515
init_compute_device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2516
open_compute_device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2517
query_available_compute_devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2518
release_all_compute_devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2519
release_compute_device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2519
set_compute_device_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2520
25.2 Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2521
count_relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2521
get_modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2523
reset_obj_db . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2523
25.3 Encrypted Item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2524
read_encrypted_item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2524
write_encrypted_item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2525
25.4 Error Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2526
get_check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2526
get_error_text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2526
get_extended_error_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2527
get_spy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2528
query_spy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2528
set_check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2529
set_spy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2530
25.5 I/O Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2532
close_io_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2532
close_io_device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2533
control_io_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2533
control_io_device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2534
control_io_interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2534
get_io_channel_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2535
get_io_device_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2536
open_io_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2537
open_io_device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2538
query_io_device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2539
query_io_interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2540
read_io_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2541
set_io_channel_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2542
set_io_device_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2542
write_io_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2543
25.6 Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2544
get_chapter_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2544
get_keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2545
get_operator_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2545
get_operator_name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2547
get_param_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2547
get_param_names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2549
get_param_num . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2550
get_param_types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2551
query_operator_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2552
query_param_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2552
search_operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2553
25.7 Memory Block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2553
compare_memory_block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2553
create_memory_block_extern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2554
create_memory_block_extern_copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2555
get_memory_block_ptr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2556
read_memory_block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2556
write_memory_block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2557
25.8 Multithreading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2558
broadcast_condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2558
clear_barrier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2558
clear_condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2559
clear_event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2559
clear_message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2560
clear_message_queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2561
clear_mutex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2562
create_barrier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2562
create_condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2563
create_event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2564
create_message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2565
create_message_queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2566
create_mutex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2567
dequeue_message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2568
enqueue_message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2569
get_current_hthread_id . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2570
get_message_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2571
get_message_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2572
get_message_queue_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2573
get_message_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2574
get_threading_attrib . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2575
interrupt_operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2576
lock_mutex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2577
read_message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2578
set_message_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2578
set_message_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2579
set_message_queue_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2581
set_message_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2582
signal_condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2583
signal_event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2584
timed_wait_condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2584
try_lock_mutex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2585
try_wait_event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2586
unlock_mutex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2586
wait_barrier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2587
wait_condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2587
wait_event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2588
write_message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2588
25.9 Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2589
count_seconds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2589
get_system_time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2590
system_call . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2591
wait_seconds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2591
25.10 Parallelization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2592
get_aop_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2592
optimize_aop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2593
query_aop_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2595
read_aop_knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2596
set_aop_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2597
write_aop_knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2599
25.11 Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2600
get_system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2600
get_system_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2604
set_operator_timeout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2605
set_system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2606
25.12 Serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2620
clear_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2620
close_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2621
get_serial_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2621
open_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2622
read_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2623
set_serial_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2623
write_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2625
25.13 Serialized Item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2625
clear_serialized_item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2625
create_serialized_item_ptr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2626
decrypt_serialized_item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2627
encrypt_serialized_item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2628
fread_serialized_item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2628
fwrite_serialized_item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2629
get_serialized_item_ptr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2630
25.14 Sockets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2630
close_socket . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2630
get_next_socket_data_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2631
get_socket_descriptor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2631
get_socket_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2632
open_socket_accept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2633
open_socket_connect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2635
receive_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2636
receive_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2637
receive_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2638
receive_serialized_item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2638
receive_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2639
receive_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2639
send_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2640
send_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2641
send_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2642
send_serialized_item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2642
send_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2643
send_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2644
set_socket_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2645
socket_accept_connect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2645

26 Tools 2647
26.1 Background Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2647
close_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2647
create_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2648
get_bg_esti_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2650
give_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2652
run_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2653
set_bg_esti_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2654
update_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2656
26.2 Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2657
abs_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2657
compose_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2658
create_funct_1d_array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2658
create_funct_1d_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2659
derivate_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2660
funct_1d_to_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2661
get_pair_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2661
get_y_value_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2661
integrate_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2662
invert_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2663
local_min_max_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2663
match_funct_1d_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2664
negate_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2665
num_points_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2666
read_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2666
sample_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2666
scale_y_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2667
smooth_funct_1d_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2668
smooth_funct_1d_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2668
transform_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2669
write_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2670
x_range_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2670
y_range_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2671
zero_crossings_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2671
26.3 Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2672
angle_ll . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2672
angle_lx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2673
apply_distance_transform_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2674
area_intersection_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2675
clear_distance_transform_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2676
create_distance_transform_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2677
deserialize_distance_transform_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2679
distance_cc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2679
distance_cc_min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2680
distance_cc_min_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2681
distance_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2682
distance_lc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2684
distance_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2684
distance_pc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2685
distance_pl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2686
distance_point_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2687
distance_point_pluecker_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2688
distance_pp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2689
distance_pr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2690
distance_ps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2691
distance_rr_min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2692
distance_rr_min_dil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2693
distance_sc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2693
distance_sl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2694
distance_sr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2695
distance_ss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2696
get_distance_transform_xld_contour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2698
get_distance_transform_xld_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2698
get_points_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2699
intersection_circle_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2700
intersection_circles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2701
intersection_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2703
intersection_line_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2704
intersection_line_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2705
intersection_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2705
intersection_segment_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2706
intersection_segment_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2707
intersection_segment_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2708
intersection_segments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2709
pluecker_line_to_point_direction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2710
pluecker_line_to_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2711
point_direction_to_pluecker_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2712
points_to_pluecker_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2713
projection_pl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2714
read_distance_transform_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2715
serialize_distance_transform_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2716
set_distance_transform_xld_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2717
write_distance_transform_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2718
26.4 Grid Rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2718
connect_grid_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2718
create_rectification_grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2720
find_rectification_grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2720
gen_arbitrary_distortion_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2721
gen_grid_rectification_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2723
26.5 Hough . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2724
hough_circle_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2724
hough_circles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2725
hough_line_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2726
hough_line_trans_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2727
hough_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2728
hough_lines_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2729
select_matching_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2731
26.6 Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2732
clear_scattered_data_interpolator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2732
create_scattered_data_interpolator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2733
interpolate_scattered_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2734
interpolate_scattered_data_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2734
interpolate_scattered_data_points_to_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2736
26.7 Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2737
line_orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2737
line_position . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2738
26.8 Mosaicking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2739
adjust_mosaic_images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2739
bundle_adjust_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2742
gen_bundle_adjusted_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2745
gen_cube_map_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2746
gen_projective_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2748
gen_spherical_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2750
proj_match_points_distortion_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2752
proj_match_points_distortion_ransac_guided . . . . . . . . . . . . . . . . . . . . . . . . . . . 2756
proj_match_points_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2760
proj_match_points_ransac_guided . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2762

27 Transformations 2767
27.1 2D Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2767
affine_trans_pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2771
affine_trans_point_2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2773
deserialize_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2774
hom_mat2d_compose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2774
hom_mat2d_determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2775
hom_mat2d_identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2776
hom_mat2d_invert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2777
hom_mat2d_reflect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2777
hom_mat2d_reflect_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2779
hom_mat2d_rotate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2780
hom_mat2d_rotate_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2781
hom_mat2d_scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2783
hom_mat2d_scale_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2784
hom_mat2d_slant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2785
hom_mat2d_slant_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2787
hom_mat2d_to_affine_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2788
hom_mat2d_translate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2789
hom_mat2d_translate_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2791
hom_mat2d_transpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2792
hom_mat3d_project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2792
hom_vector_to_proj_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2794
point_line_to_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2796
projective_trans_pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2800
projective_trans_point_2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2801
serialize_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2802
vector_angle_to_rigid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2802
vector_field_to_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2804
vector_to_aniso . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2804
vector_to_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2805
vector_to_proj_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2807
vector_to_proj_hom_mat2d_distortion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2809
vector_to_rigid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2811
vector_to_similarity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2812
27.2 3D Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2813
affine_trans_point_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2813
deserialize_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2815
hom_mat3d_compose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2815
hom_mat3d_determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2816
hom_mat3d_identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2817
hom_mat3d_invert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2817
hom_mat3d_rotate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2818
hom_mat3d_rotate_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2820
hom_mat3d_scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2822
hom_mat3d_scale_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2823
hom_mat3d_to_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2825
hom_mat3d_translate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2826
hom_mat3d_translate_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2827
hom_mat3d_transpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2828
point_pluecker_line_to_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2829
pose_to_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2830
projective_trans_hom_point_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2831
projective_trans_point_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2832
serialize_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2833
vector_to_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2834
27.3 Dual Quaternions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2835
deserialize_dual_quat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2835
dual_quat_compose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2836
dual_quat_conjugate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2837
dual_quat_interpolate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2838
dual_quat_normalize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2839
dual_quat_to_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2839
dual_quat_to_screw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2840
dual_quat_trans_line_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2841
dual_quat_trans_point_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2843
screw_to_dual_quat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2844
serialize_dual_quat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2845
27.4 Misc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2846
convert_point_3d_cart_to_spher . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2846
convert_point_3d_spher_to_cart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2847
27.5 Poses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2849
convert_pose_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2849
create_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2850
deserialize_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2854
dual_quat_to_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2855
get_circle_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2855
get_pose_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2857
get_rectangle_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2857
pose_average . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2861
pose_compose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2862
pose_invert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2862
pose_to_dual_quat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2863
pose_to_quat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2864
proj_hom_mat2d_to_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2864
quat_to_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2865
read_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2866
serialize_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2867
set_origin_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2867
vector_to_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2868
write_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2871
27.6 Quaternions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2873
axis_angle_to_quat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2873
deserialize_quat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2874
quat_compose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2874
quat_conjugate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2875
quat_interpolate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2875
quat_normalize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2876
quat_rotate_point_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2877
quat_to_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2878
serialize_quat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2878

28 Tuple 2881
28.1 Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2881
tuple_abs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2881
tuple_acos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2881
tuple_acosh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2882
tuple_add . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2883
tuple_asin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2883
tuple_asinh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2884
tuple_atan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2885
tuple_atan2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2885
tuple_atanh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2886
tuple_cbrt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2887
tuple_ceil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2887
tuple_cos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2888
tuple_cosh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2888
tuple_cumul . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2889
tuple_deg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2890
tuple_div . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2890
tuple_erf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2891
tuple_erfc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2891
tuple_exp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2892
tuple_exp10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2893
tuple_exp2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2893
tuple_fabs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2894
tuple_floor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2894
tuple_fmod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2895
tuple_hypot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2896
tuple_ldexp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2896
tuple_lgamma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2897
tuple_log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2898
tuple_log10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2898
tuple_log2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2899
tuple_max2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2900
tuple_min2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2900
tuple_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2901
tuple_mult . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2902
tuple_neg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2902
tuple_pow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2903
tuple_rad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2903
tuple_sgn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2904
tuple_sin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2905
tuple_sinh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2905
tuple_sqrt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2906
tuple_sub . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2906
tuple_tan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2907
tuple_tanh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2908
tuple_tgamma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2908
28.2 Bit Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2909
tuple_band . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2909
tuple_bnot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2910
tuple_bor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2910
tuple_bxor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2911
tuple_lsh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2912
tuple_rsh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2912
28.3 Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2913
tuple_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2913
tuple_equal_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2914
tuple_greater . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2914
tuple_greater_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2915
tuple_greater_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2916
tuple_greater_equal_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2916
tuple_less . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2917
tuple_less_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2918
tuple_less_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2919
tuple_less_equal_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2919
tuple_not_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2920
tuple_not_equal_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2921
28.4 Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2922
handle_to_integer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2922
integer_to_handle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2922
tuple_chr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2923
tuple_chrt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2924
tuple_int . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2925
tuple_number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2925
tuple_ord . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2926
tuple_ords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2927
tuple_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2928
tuple_round . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2928
tuple_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2929
28.5 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2931
clear_handle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2931
tuple_concat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2932
tuple_constant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2933
tuple_gen_const . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2933
tuple_gen_sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2934
tuple_rand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2935
tuple_repeat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2935
tuple_repeat_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2936
28.6 Data Containers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2937
copy_dict . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2937
create_dict . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2938
dict_to_json . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2939
get_dict_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2940
get_dict_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2941
get_dict_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2942
json_to_dict . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2944
read_dict . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2944
remove_dict_key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2946
set_dict_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2947
set_dict_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2948
set_dict_tuple_at . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2949
write_dict . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2951
28.7 Element Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2952
tuple_inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2952
tuple_sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2953
tuple_sort_index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2953
28.8 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2954
get_handle_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2954
get_handle_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2955
get_handle_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2955
tuple_deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2956
tuple_histo_range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2957
tuple_length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2958
tuple_max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2959
tuple_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2959
tuple_median . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2960
tuple_min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2960
tuple_sum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2961
28.9 Logical Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2962
tuple_and . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2962
tuple_not . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2962
tuple_or . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2963
tuple_xor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2964
28.10 Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2964
tuple_insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2964
tuple_remove . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2965
tuple_replace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2966
28.11 Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2967
tuple_find . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2967
tuple_find_first . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2968
tuple_find_last . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2968
tuple_first_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2969
tuple_last_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2970
tuple_select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2970
tuple_select_mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2971
tuple_select_range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2972
tuple_select_rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2973
tuple_str_bit_select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2973
tuple_uniq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2974
28.12 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2975
tuple_difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2975
tuple_intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2976
tuple_symmdiff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2977
tuple_union . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2977
28.13 String Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2978
tuple_environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2979
tuple_join . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2979
tuple_regexp_match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2980
tuple_regexp_replace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2983
tuple_regexp_select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2984
tuple_regexp_test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2985
tuple_split . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2986
tuple_str_distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2987
tuple_str_first_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2988
tuple_str_last_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2989
tuple_str_replace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2990
tuple_strchr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2990
tuple_strlen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2991
tuple_strrchr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2992
tuple_strrstr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2993
tuple_strstr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2994
tuple_substr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2995
28.14 Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2996
tuple_is_handle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2996
tuple_is_handle_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2997
tuple_is_int . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2997
tuple_is_int_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2998
tuple_is_mixed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2999
tuple_is_nan_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3000
tuple_is_number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3000
tuple_is_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3001
tuple_is_real_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3002
tuple_is_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3003
tuple_is_string_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3004
tuple_is_valid_handle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3005
tuple_sem_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3005
tuple_sem_type_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3006
tuple_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3007
tuple_type_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3008

29 XLD 3011
29.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3011
get_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3011
get_lines_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3011
get_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3012
get_polygon_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3013
29.2 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3014
gen_circle_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3014
gen_contour_nurbs_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3015
gen_contour_polygon_rounded_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3017
gen_contour_polygon_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3018
gen_contour_region_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3018
gen_contours_skeleton_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3020
gen_cross_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3021
gen_ellipse_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3021
gen_nurbs_interp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3023
gen_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3024
gen_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3025
gen_rectangle2_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3026
mod_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3027
29.3 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3028
area_center_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3028
area_center_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3029
circularity_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3030
compactness_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3031
contour_point_num_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3032
convexity_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3032
diameter_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3033
dist_ellipse_contour_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3034
dist_ellipse_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3035
dist_rectangle2_contour_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3037
eccentricity_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3038
eccentricity_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3039
elliptic_axis_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3040
elliptic_axis_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3041
fit_circle_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3042
fit_ellipse_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3044
fit_line_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3047
fit_rectangle2_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3049
get_contour_angle_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3052
get_contour_attrib_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3052
get_contour_global_attrib_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3056
get_regress_params_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3059
height_width_ratio_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3061
info_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3062
length_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3062
local_max_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3063
max_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3064
moments_any_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3064
moments_any_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3066
moments_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3068
moments_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3068
orientation_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3069
orientation_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3070
query_contour_attribs_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3071
query_contour_global_attribs_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3072
rectangularity_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3072
select_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3073
select_shape_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3074
select_xld_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3077
smallest_circle_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3077
smallest_rectangle1_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3078
smallest_rectangle2_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3079
test_closed_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3080
test_self_intersection_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3081
test_xld_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3081
29.4 Geometric Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3082
affine_trans_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3082
affine_trans_polygon_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3083
gen_parallel_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3084
polar_trans_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3085
polar_trans_contour_xld_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3087
projective_trans_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3089
29.5 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3090
difference_closed_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3090
difference_closed_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3091
intersection_closed_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3092
intersection_closed_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3093
intersection_region_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3094
symm_difference_closed_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3095
symm_difference_closed_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3096
union2_closed_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3097
union2_closed_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3098
29.6 Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3100
add_noise_white_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3100
clip_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3101
clip_end_points_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3101
close_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3102
combine_roads_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3103
crop_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3104
merge_cont_line_scan_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3105
regress_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3106
segment_contour_attrib_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3107
segment_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3109
shape_trans_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3111
smooth_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3112
sort_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3112
split_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3113
union_adjacent_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3114
union_cocircular_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3116
union_collinear_contours_ext_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3119
union_collinear_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3121
union_cotangential_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3125
union_straight_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3129

Index 3131
Chapter 1

1D Measuring

This chapter contains operators for 1D measuring.


Concept of 1D measuring
With 1D measuring, edges, i.e., transitions from light to dark or from dark to light, can be located along a predefined
line or arc. This allows you to measure the dimensions of parts quickly, easily, and with high accuracy. Note that if you
want to measure the dimensions of geometric primitives like circles, ellipses, rectangles, or lines, and approximate
values for the positions, orientations, and geometric shapes are known, 2D Metrology may be a suitable alternative.

[Figure (1), (2): Measure edges and the distances between them along a line (1) or along an arc (2). These images are from the example programs fuzzy_measure_pin.hdev and measure_ring.hdev.]

In the following, the steps that are required to use 1D measuring are described briefly.

Generate measure object: First, a measure object must be generated that describes the region of interest for the
measurement. If the measurement should be performed along a line, the measure object is defined by a
rectangle. If it should be performed along an arc, the measure object is defined as an annular arc. The
measure objects are generated by the operators

• gen_measure_rectangle2 or
• gen_measure_arc.

Note that you can use shape-based matching (see chapter Matching / Shape-Based) to automatically align
the measure objects.
Perform the measurement: Then, the actual measurement is performed. For this, typically one of the following
operators is used:


• measure_pos extracts straight edges perpendicular to the main axis of the measure object and returns
the positions of the edge centers, the edge amplitudes, and the distances between consecutive edges.
• measure_pairs extracts straight edge pairs perpendicular to the main axis of the measure object
and returns the positions of the edge centers of the edge pairs, the edge amplitudes for the edge pairs,
the distances between the edges of an edge pair, and the distances between consecutive edge pairs.
• measure_thresh extracts points with a particular gray value along the main axis of the measure
object and returns their positions and the distances between consecutive points.

Alternatively, if there are extra edges that do not belong to the measurement, fuzzy measuring can be applied.
Here, so-called fuzzy rules, which describe the features of good edges, must be defined. Possible
features are, e.g., the position, the distance, the gray values, or the amplitude of edges. The corresponding
fuzzy functions are created with create_funct_1d_pairs and passed to the tool with set_fuzzy_measure or
set_fuzzy_measure_norm_pair. Then, based on these rules, one of the following operators will
extract the most appropriate edges:

• fuzzy_measure_pos extracts straight edges perpendicular to the main axis of the measure object
and returns the positions of the edge centers, the edge amplitudes, the fuzzy scores, and the distances
between consecutive edges.
• fuzzy_measure_pairs extracts straight edge pairs perpendicular to the main axis of the measure
object and returns the positions of the first and second edges of the edge pairs, the edge amplitudes for
the edge pairs, the positions of the centers of the edge pairs, the fuzzy scores, the distances between
the edges of an edge pair, and the distances between consecutive edge pairs.
• fuzzy_measure_pairing is similar to fuzzy_measure_pairs with the exception that it is
also possible to extract interleaving and included pairs using the parameter Pairing.

Alternatively to the automatic extraction of edges or points within the measure object, you can also extract a
one-dimensional gray value profile perpendicular to the rectangle or annular arc and evaluate this gray value
information according to your needs. The gray value profile within the measure object can be extracted with
the operator

• measure_projection.

Destroy measure object handle: When you no longer need the measure object, you destroy it by passing the
handle to

• close_measure.
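
The following minimal HDevelop sketch illustrates these three steps for a measurement along a line. It assumes that the image Image has already been acquired; the ROI coordinates and the extraction parameters (Sigma, Threshold) are placeholder values that have to be adapted to the application:

    get_image_size (Image, Width, Height)
    * Generate a measure object for a rectangular ROI (placeholder coordinates).
    gen_measure_rectangle2 (100, 200, 0, 150, 10, Width, Height, 'nearest_neighbor', MeasureHandle)
    * Extract straight edges perpendicular to the major axis of the rectangle.
    measure_pos (Image, MeasureHandle, 1.0, 30, 'all', 'all', RowEdge, ColumnEdge, Amplitude, Distance)
    * Destroy the measure object when it is no longer needed.
    close_measure (MeasureHandle)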

Further operators
In addition to the operators mentioned above, you can use reset_fuzzy_measure to discard a fuzzy func-
tion of a fuzzy set that was set via set_fuzzy_measure or set_fuzzy_measure_norm_pair be-
fore, translate_measure to translate the reference point of the measure object to a specified position,
write_measure and read_measure to write the measure object to file and read it from file again, and
serialize_measure and deserialize_measure to serialize and deserialize the measure object.
Glossary
In the following, the most important terms that are used in the context of 1D Measuring are described.

measure object A data structure that contains a specific region of interest that is prepared for the extraction of
straight edges which lie perpendicular to the major axis of a rectangle or an annular arc.
annular arc A circular arc with an associated width.

Further Information
See also the “Solution Guide Basics” and “Solution Guide on 1D Measuring” for further details about 1D Measuring.
Learn about 1D Measuring and many other topics in interactive online courses at our MVTec Academy.


close_measure ( : : MeasureHandle : )

Delete a measure object.


close_measure deletes the measure object given by MeasureHandle. The memory used for the measure
object is freed.
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
Parameters
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
Result
If the parameter values are correct the operator close_measure returns the value 2 (H_MSG_TRUE). Otherwise
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• MeasureHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
gen_measure_rectangle2, gen_measure_arc, measure_pos, measure_pairs
Alternatives
clear_handle
See also
clear_handle
Module
1D Metrology

deserialize_measure ( : : SerializedItemHandle : MeasureHandle )

Deserialize a serialized measure object.


deserialize_measure deserializes a measure object that was serialized by serialize_measure (see
fwrite_serialized_item for an introduction to the basic principle of serialization). The serialized measure
object is defined by the handle SerializedItemHandle. The deserialized values are stored in an automati-
cally created measure object with the handle MeasureHandle.
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
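The following HDevelop lines sketch a serialization round trip; the measure object parameters and the image file
name are assumptions for illustration only:
* Create a measure object, serialize it, and restore it again (sketch).
read_image (Image, 'fuse')
get_image_size (Image, Width, Height)
gen_measure_rectangle2 (240, 320, 0, 100, 10, Width, Height, 'nearest_neighbor', MeasureHandle)
serialize_measure (MeasureHandle, SerializedItemHandle)
* The serialized item could now be written with fwrite_serialized_item or sent
* to another process; here it is simply deserialized again and used directly.
deserialize_measure (SerializedItemHandle, MeasureHandleRestored)
measure_pos (Image, MeasureHandleRestored, 1.0, 30.0, 'all', 'all', RowEdge, ColumnEdge, Amplitude, Distance)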
Parameters
. SerializedItemHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
. MeasureHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .measure ; handle
Measure object handle.
Result
If the parameters are valid, the operator deserialize_measure returns the value 2 (H_MSG_TRUE). If nec-
essary, an exception is raised.
Execution Information


• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
fread_serialized_item, receive_serialized_item, serialize_measure
Possible Successors
measure_pos, measure_pairs
See also
read_measure, write_measure
Module
1D Metrology

fuzzy_measure_pairing ( Image : : MeasureHandle, Sigma, AmpThresh,
FuzzyThresh, Transition, Pairing, NumPairs : RowEdgeFirst,
ColumnEdgeFirst, AmplitudeFirst, RowEdgeSecond, ColumnEdgeSecond,
AmplitudeSecond, RowPairCenter, ColumnPairCenter, FuzzyScore,
IntraDistance )

Extract straight edge pairs perpendicular to a rectangle or an annular arc.


fuzzy_measure_pairing serves to extract straight edge pairs that lie perpendicular to the major axis of a
rectangle or an annular arc.
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
The extraction algorithm of fuzzy_measure_pairing is identical to fuzzy_measure_pairs (see there
for details) with the exception that it is also possible to extract interleaving and included pairs using the parameter
Pairing. Currently only ’no_restriction’ is available, which returns all possible edge pairs, allowing interleaving
and inclusion of pairs.
Only the NumPairs best-scored edge pairs are returned; setting NumPairs to 0 returns all edge combinations
that were found.
The selected edges are returned as single points, which lie on the major axis of the rectangle or annular arc. The
corresponding edge amplitudes are returned in AmplitudeFirst and AmplitudeSecond, the fuzzy scores in
FuzzyScore. In addition, the distance between each edge pair is returned in IntraDistance, corresponding
to the distance between EdgeFirst[i] and EdgeSecond[i].
Attention
fuzzy_measure_pairing only returns meaningful results if the assumptions that the edges are straight and
perpendicular to the major axis of the rectangle or annular arc are fulfilled. Thus, it should not be used to extract
edges from curved objects, for example. Furthermore, the user should ensure that the rectangle or annular arc is
as close to perpendicular as possible to the edges in the image. Additionally, Sigma must not become larger than
approx. 0.5 * Length1 (for Length1 see gen_measure_rectangle2).
It should be kept in mind that fuzzy_measure_pairing ignores the domain of Image for efficiency reasons.
If certain regions in the image should be excluded from the measurement a new measure object with appropriately
modified parameters should be generated.
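The following sketch shows a possible call of fuzzy_measure_pairing that keeps the five best-scored pairs;
the geometry, the image file, and the fuzzy function are placeholder assumptions:
* Score pairs by their intra-pair distance and allow interleaving pairs (sketch).
read_image (Image, 'fuse')
get_image_size (Image, Width, Height)
gen_measure_rectangle2 (240, 320, 0, 100, 10, Width, Height, 'bilinear', MeasureHandle)
create_funct_1d_pairs ([0.0, 10.0, 25.0], [0.0, 1.0, 0.0], SizeFunction)
set_fuzzy_measure (MeasureHandle, 'size', SizeFunction)
fuzzy_measure_pairing (Image, MeasureHandle, 1.0, 30.0, 0.5, 'all', 'no_restriction', 5, RowEdgeFirst, ColumnEdgeFirst, AmplitudeFirst, RowEdgeSecond, ColumnEdgeSecond, AmplitudeSecond, RowPairCenter, ColumnPairCenter, FuzzyScore, IntraDistance)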
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2 / real
Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Sigma of Gaussian smoothing.
Default: 1.0
Suggested values: Sigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Value range: 0.4 ≤ Sigma ≤ 100 (lin)
Minimum increment: 0.01
Recommended increment: 0.1

. AmpThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real


Minimum edge amplitude.
Default: 30.0
Suggested values: AmpThresh ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Value range: 1 ≤ AmpThresh ≤ 255 (lin)
Minimum increment: 0.5
Recommended increment: 2
. FuzzyThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Minimum fuzzy value.
Default: 0.5
Suggested values: FuzzyThresh ∈ {0.1, 0.3, 0.5, 0.7, 0.9}
Value range: 0.0 ≤ FuzzyThresh ≤ 1.0 (lin)
Recommended increment: 0.1
. Transition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Select the first gray value transition of the edge pairs.
Default: ’all’
List of values: Transition ∈ {’all’, ’positive’, ’negative’}
. Pairing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Constraint of pairing.
Default: ’no_restriction’
List of values: Pairing ∈ {’no_restriction’}
. NumPairs (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Number of edge pairs.
Default: 10
Suggested values: NumPairs ∈ {0, 1, 10, 20, 50}
Value range: 0 ≤ NumPairs
Recommended increment: 1
. RowEdgeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinate of the first edge.
. ColumnEdgeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinate of the first edge.
. AmplitudeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Edge amplitude of the first edge (with sign).
. RowEdgeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinate of the second edge.
. ColumnEdgeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinate of the second edge.
. AmplitudeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Edge amplitude of the second edge (with sign).
. RowPairCenter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinate of the center of the edge pair.
. ColumnPairCenter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinate of the center of the edge pair.
. FuzzyScore (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Fuzzy evaluation of the edge pair.
. IntraDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Distance between the edges of the edge pair.
Result
If the parameter values are correct the operator fuzzy_measure_pairing returns the value 2
(H_MSG_TRUE). Otherwise an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.


Possible Predecessors
gen_measure_rectangle2, gen_measure_arc, set_fuzzy_measure
Possible Successors
close_measure
Alternatives
edges_sub_pix, fuzzy_measure_pairs, measure_pairs
See also
fuzzy_measure_pos, measure_pos
Module
1D Metrology

fuzzy_measure_pairs ( Image : : MeasureHandle, Sigma, AmpThresh,
FuzzyThresh, Transition : RowEdgeFirst, ColumnEdgeFirst,
AmplitudeFirst, RowEdgeSecond, ColumnEdgeSecond, AmplitudeSecond,
RowEdgeCenter, ColumnEdgeCenter, FuzzyScore, IntraDistance,
InterDistance )

Extract straight edge pairs perpendicular to a rectangle or an annular arc.


fuzzy_measure_pairs serves to extract straight edge pairs which lie perpendicular to the major axis of a
rectangle or an annular arc. In addition to measure_pairs it uses fuzzy functions to evaluate and select the
edge pairs.
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
The extraction algorithm of fuzzy_measure_pairs is identical to fuzzy_measure_pos. In addi-
tion, neighboring edges are grouped to pairs. To extract pairs that intersect or include each other, use
fuzzy_measure_pairing.
If Transition = ’positive’, the edge points with a dark-to-light transition in the direction of the major axis of
the rectangle or annular arc are returned in RowEdgeFirst and ColumnEdgeFirst. In this case, the cor-
responding edges with a light-to-dark transition are returned in RowEdgeSecond and ColumnEdgeSecond.
If Transition = ’negative’, the behavior is exactly opposite. If Transition = ’all’, the first detected edge
defines the transition for RowEdgeFirst and ColumnEdgeFirst. I.e., dependent on the positioning of the
measure object, edge pairs with a light-dark-light transition or edge pairs with a dark-light-dark transition are
returned. This is suited, e.g., to measure objects with different brightness relative to the background.
Having extracted subpixel edge locations, the edges are paired. The pairing algorithm groups the edges such that
interleavings and inclusions of pairs are prohibited. The features of an edge pair are evaluated by a fuzzy function,
which can be set by set_fuzzy_measure or set_fuzzy_measure_norm_pair. Which edge pairs are
selected can be determined with the parameter FuzzyThresh, which constitutes a threshold on the weight over
all fuzzy sets, i.e., the geometric mean of the weights of the defined fuzzy functions.
The selected edges are returned as single points, which lie on the major axis of the rectangle or annular arc. The
corresponding edge amplitudes are returned in AmplitudeFirst and AmplitudeSecond, the fuzzy scores
in FuzzyScore. In addition, the distance between each edge pair is returned in IntraDistance and the
distance between consecutive edge pairs is returned in InterDistance. Here, IntraDistance[i] corresponds to
the distance between EdgeFirst[i] and EdgeSecond[i], while InterDistance[i] corresponds to the distance between
EdgeSecond[i] and EdgeFirst[i+1], i.e., the tuple InterDistance contains one element less than the tuples of
the edge pairs.
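As an illustration, the following sketch weights the edge amplitudes with a ’contrast’ fuzzy function and derives
the mean pair width from IntraDistance; all numeric values and the image file are assumptions:
* Favor strong edge pairs and evaluate their widths (sketch).
read_image (Image, 'fuse')
get_image_size (Image, Width, Height)
gen_measure_rectangle2 (240, 320, 0, 100, 10, Width, Height, 'bilinear', MeasureHandle)
create_funct_1d_pairs ([20.0, 40.0], [0.0, 1.0], ContrastFunction)
set_fuzzy_measure (MeasureHandle, 'contrast', ContrastFunction)
fuzzy_measure_pairs (Image, MeasureHandle, 1.0, 30.0, 0.5, 'all', RowEdgeFirst, ColumnEdgeFirst, AmplitudeFirst, RowEdgeSecond, ColumnEdgeSecond, AmplitudeSecond, RowEdgeCenter, ColumnEdgeCenter, FuzzyScore, IntraDistance, InterDistance)
* Assumes that at least one pair was found.
MeanWidth := mean(IntraDistance)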
Attention
fuzzy_measure_pairs only returns meaningful results if the assumptions that the edges are straight and
perpendicular to the major axis of the rectangle or annular arc are fulfilled. Thus, it should not be used to extract
edges from curved objects, for example. Furthermore, the user should ensure that the rectangle or annular arc is
as close to perpendicular as possible to the edges in the image. Additionally, Sigma must not become larger than
approx. 0.5 * Length1 (for Length1 see gen_measure_rectangle2).
It should be kept in mind that fuzzy_measure_pairs ignores the domain of Image for efficiency reasons. If
certain regions in the image should be excluded from the measurement a new measure object with appropriately
modified parameters should be generated.

Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2 / real
Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Sigma of Gaussian smoothing.
Default: 1.0
Suggested values: Sigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Value range: 0.4 ≤ Sigma ≤ 100 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
. AmpThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Minimum edge amplitude.
Default: 30.0
Suggested values: AmpThresh ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Value range: 1 ≤ AmpThresh ≤ 255 (lin)
Minimum increment: 0.5
Recommended increment: 2
. FuzzyThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Minimum fuzzy value.
Default: 0.5
Suggested values: FuzzyThresh ∈ {0.1, 0.3, 0.5, 0.7, 0.9}
Value range: 0.0 ≤ FuzzyThresh ≤ 1.0 (lin)
Recommended increment: 0.1
. Transition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Select the first gray value transition of the edge pairs.
Default: ’all’
List of values: Transition ∈ {’all’, ’positive’, ’negative’}
. RowEdgeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinate of the first edge point.
. ColumnEdgeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinate of the first edge point.
. AmplitudeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Edge amplitude of the first edge (with sign).
. RowEdgeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinate of the second edge point.
. ColumnEdgeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinate of the second edge point.
. AmplitudeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Edge amplitude of the second edge (with sign).
. RowEdgeCenter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinate of the center of the edge pair.
. ColumnEdgeCenter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinate of the center of the edge pair.
. FuzzyScore (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Fuzzy evaluation of the edge pair.
. IntraDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Distance between edges of an edge pair.
. InterDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Distance between consecutive edge pairs.
Result
If the parameter values are correct the operator fuzzy_measure_pairs returns the value 2 (H_MSG_TRUE).
Otherwise an exception is raised.
Execution Information


• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
gen_measure_rectangle2, gen_measure_arc, set_fuzzy_measure
Possible Successors
close_measure
Alternatives
edges_sub_pix, fuzzy_measure_pairing, measure_pairs
See also
fuzzy_measure_pos, measure_pos
Module
1D Metrology

fuzzy_measure_pos ( Image : : MeasureHandle, Sigma, AmpThresh,
FuzzyThresh, Transition : RowEdge, ColumnEdge, Amplitude,
FuzzyScore, Distance )

Extract straight edges perpendicular to a rectangle or an annular arc.


fuzzy_measure_pos extracts straight edges which lie perpendicular to the major axis of a rectangle or an
annular arc. In addition to measure_pos it uses fuzzy functions to evaluate and select the edges.
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
The algorithm of fuzzy_measure_pos works by averaging the gray values in “slices” perpendicular to the
major axis of the rectangle or annular arc in order to obtain a one-dimensional edge profile. The sampling is done
at subpixel positions in the image Image at integer row and column distances (in the coordinate frame of the
rectangle) from the center of the rectangle. Since this involves some calculations which can be used repeatedly
in several measurements, the operator gen_measure_rectangle2 is used to perform these calculations only
once, thus increasing the speed of fuzzy_measure_pos significantly. Since there is a trade-off between ac-
curacy and speed in the subpixel calculations of the gray values, and thus in the accuracy of the extracted edge
positions, different interpolation schemes can be selected in gen_measure_rectangle2. (The interpolation
only influences rectangles that are not aligned with the image axes, as well as annular arcs.) The measure object generated with
gen_measure_rectangle2 is passed in MeasureHandle.
After the one-dimensional edge profile has been calculated, subpixel edge locations are computed by convolving
the profile with the derivatives of a Gaussian smoothing kernel of standard deviation Sigma. Salient edges can be
selected with the parameter AmpThresh, which constitutes a threshold on the amplitude, i.e., the absolute value of
the first derivative of the edge. Additionally, it is possible to select only positive edges, i.e., edges which constitute
a dark-to-light transition in the direction of the major axis of the rectangle (Transition = ’positive’), only
negative edges, i.e., light-to-dark transitions (Transition = ’negative’), or both types of edges (Transition
= ’all’). Which of the remaining edge candidates are returned is determined by the fuzzy evaluation described below.
Having extracted subpixel edge locations, features of these edges are evaluated by a corresponding fuzzy function,
which can be set by set_fuzzy_measure. Which edges are selected can be determined with the parameter
FuzzyThresh, which constitutes a threshold on the weight over all fuzzy sets, i.e., the geometric mean of the
weights of the defined sets.
The selected edges are returned as single points, which lie on the major axis of the rectangle or annular arc, in
(RowEdge, ColumnEdge). The corresponding edge amplitudes are returned in Amplitude, the fuzzy scores
in FuzzyScore. In addition, the distance between consecutive edge points is returned in Distance. Here,
Distance[i] corresponds to the distance between Edge[i] and Edge[i+1], i.e., the tuple Distance contains one
element less than the tuples RowEdge and ColumnEdge.
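A possible call sequence is sketched below; the fuzzy function prefers edges close to the center of the measure
rectangle, and all numeric values as well as the image file are assumptions:
* Prefer edges near the center of the measure rectangle (sketch).
read_image (Image, 'fuse')
get_image_size (Image, Width, Height)
gen_measure_rectangle2 (240, 320, 0, 100, 10, Width, Height, 'nearest_neighbor', MeasureHandle)
* Weight 1.0 at the reference point, decreasing towards both rectangle ends.
create_funct_1d_pairs ([-50.0, 0.0, 50.0], [0.0, 1.0, 0.0], PositionFunction)
set_fuzzy_measure (MeasureHandle, 'position_center', PositionFunction)
fuzzy_measure_pos (Image, MeasureHandle, 1.0, 30.0, 0.5, 'all', RowEdge, ColumnEdge, Amplitude, FuzzyScore, Distance)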
Attention
fuzzy_measure_pos only returns meaningful results if the assumptions that the edges are straight and per-
pendicular to the major axis of the rectangle are fulfilled. Thus, it should not be used to extract edges from curved
objects, for example. Furthermore, the user should ensure that the rectangle is as close to perpendicular as possible
to the edges in the image. Additionally, Sigma must not become larger than approx. 0.5 * Length1 (for Length1
see gen_measure_rectangle2).
It should be kept in mind that fuzzy_measure_pos ignores the domain of Image for efficiency reasons. If
certain regions in the image should be excluded from the measurement a new measure object with appropriately
modified parameters should be generated.
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2 / real
Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Sigma of Gaussian smoothing.
Default: 1.0
Suggested values: Sigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Value range: 0.4 ≤ Sigma ≤ 100 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
. AmpThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Minimum edge amplitude.
Default: 30.0
Suggested values: AmpThresh ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Value range: 1 ≤ AmpThresh ≤ 255 (lin)
Minimum increment: 0.5
Recommended increment: 2
. FuzzyThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Minimum fuzzy value.
Default: 0.5
Suggested values: FuzzyThresh ∈ {0.1, 0.3, 0.5, 0.6, 0.7, 0.9}
Value range: 0.0 ≤ FuzzyThresh ≤ 1.0 (lin)
Recommended increment: 0.1
. Transition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Select light/dark or dark/light edges.
Default: ’all’
List of values: Transition ∈ {’all’, ’positive’, ’negative’}
. RowEdge (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinate of the edge point.
. ColumnEdge (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinate of the edge point.
. Amplitude (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Edge amplitude of the edge (with sign).
. FuzzyScore (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Fuzzy evaluation of the edges.
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Distance between consecutive edges.
Result
If the parameter values are correct the operator fuzzy_measure_pos returns the value 2 (H_MSG_TRUE).
Otherwise an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
gen_measure_rectangle2, gen_measure_arc, set_fuzzy_measure


Possible Successors
close_measure
Alternatives
edges_sub_pix, measure_pos
See also
fuzzy_measure_pairing, fuzzy_measure_pairs, measure_pairs
Module
1D Metrology

gen_measure_arc ( : : CenterRow, CenterCol, Radius, AngleStart,
AngleExtent, AnnulusRadius, Width, Height,
Interpolation : MeasureHandle )

Prepare the extraction of straight edges perpendicular to an annular arc.


gen_measure_arc prepares the extraction of straight edges which lie perpendicular to an annular arc. Here,
annular arc denotes a circular arc with an associated width. The center of the arc is passed in the parameters
CenterRow and CenterCol, its radius in Radius, the starting angle in AngleStart, and its angular extent
relative to the starting angle in AngleExtent. If AngleExtent > 0, an arc with counterclockwise orientation
is generated, otherwise an arc with clockwise orientation. The radius of the annular arc, i.e., half its width, is
determined by AnnulusRadius.
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
The edge extraction algorithm is described in the documentation of the operator measure_pos. As discussed
there, different types of interpolation can be used for the calculation of the one-dimensional gray value profile. For
Interpolation = ’nearest_neighbor’, the gray values in the measurement are obtained from the gray values of
the closest pixel, i.e., by constant interpolation. For Interpolation = ’bilinear’, bilinear interpolation is used,
while for Interpolation = ’bicubic’, bicubic interpolation is used.
To perform the actual measurement at optimal speed, all computations that can be used for multiple measurements
are already performed in the operator gen_measure_arc. For this, an optimized data structure, a so-called
measure object, is constructed and returned in MeasureHandle. The size of the images in which measurements
will be performed must be specified in the parameters Width and Height.
The system parameter ’int_zooming’ (see set_system) affects the accuracy and speed of the calculations used
to construct the measure object. If ’int_zooming’ is set to ’true’, the internal calculations are performed using fixed
point arithmetic, leading to much shorter execution times. However, the geometric accuracy is slightly lower in
this mode. If ’int_zooming’ is set to ’false’, the internal calculations are performed using floating point arithmetic,
leading to the maximum geometric accuracy, but also to significantly increased execution times.
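A minimal sketch of a measurement along an annular arc is shown below; the center, radius, and image file are
assumptions:
* Measure edges along a half circle with an annulus radius of 10 pixels (sketch).
read_image (Image, 'fuse')
get_image_size (Image, Width, Height)
gen_measure_arc (240, 320, 100, 0, 3.14159, 10, Width, Height, 'bilinear', MeasureHandle)
measure_pos (Image, MeasureHandle, 1.0, 30.0, 'all', 'all', RowEdge, ColumnEdge, Amplitude, Distance)
close_measure (MeasureHandle)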
Attention
Note that when using bilinear or bicubic interpolation, not only the annular arc itself but additionally the
margin around it must fit into the image. The width of the margin (in all four directions) must be at
least one pixel for bilinear interpolation and two pixels for bicubic interpolation. For projection lines that do not
fulfill this condition, no gray value is computed. Thus, no edge can be extracted at these positions.
Please also note that the center coordinates of the arc are rounded internally, so that the center lies on the pixel
grid. This is done to ensure consistency.
Parameters
. CenterRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; real / integer
Row coordinate of the center of the arc.
Default: 100.0
Suggested values: CenterRow ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Value range: 0.0 ≤ CenterRow (lin)
Minimum increment: 1.0
Recommended increment: 10.0

. CenterCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; real / integer


Column coordinate of the center of the arc.
Default: 100.0
Suggested values: CenterCol ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Value range: 0.0 ≤ CenterCol (lin)
Minimum increment: 1.0
Recommended increment: 10.0
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Radius of the arc.
Default: 50.0
Suggested values: Radius ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
(lin)
Minimum increment: 1.0
Recommended increment: 10.0
Restriction: AnnulusRadius <= Radius
. AngleStart (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real / integer
Start angle of the arc in radians.
Default: 0.0
Suggested values: AngleStart ∈ {-3.14159, -2.35619, -1.57080, -0.78540, 0.0, 0.78540, 1.57080,
2.35619, 3.14159}
Value range: -3.14159 ≤ AngleStart ≤ 3.14159 (lin)
Minimum increment: 0.03142
Recommended increment: 0.31416
. AngleExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real / integer
Angular extent of the arc in radians.
Default: 6.28318
Suggested values: AngleExtent ∈ {-6.28318, -5.49779, -4.71239, -3.92699, -3.14159, -2.35619,
-1.57080, -0.78540, 0.78540, 1.57080, 2.35619, 3.14159, 3.92699, 4.71239, 5.49779, 6.28318}
Value range: -6.28318 ≤ AngleExtent ≤ 6.28318 (lin)
Minimum increment: 0.03142
Recommended increment: 0.31416
. AnnulusRadius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Radius (half width) of the annulus.
Default: 10.0
Suggested values: AnnulusRadius ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
(lin)
Minimum increment: 1.0
Recommended increment: 10.0
Restriction: AnnulusRadius > 0
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the image to be processed subsequently.
Default: 512
Suggested values: Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768}
Value range: 0 ≤ Width (lin)
Minimum increment: 1
Recommended increment: 16
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the image to be processed subsequently.
Default: 512
Suggested values: Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576}
Value range: 0 ≤ Height (lin)
Minimum increment: 1
Recommended increment: 16
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of interpolation to be used.
Default: ’nearest_neighbor’
List of values: Interpolation ∈ {’nearest_neighbor’, ’bilinear’, ’bicubic’}


. MeasureHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .measure ; handle


Measure object handle.
Result
If the parameter values are correct, the operator gen_measure_arc returns the value 2 (H_MSG_TRUE). Oth-
erwise an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
draw_circle
Possible Successors
measure_pos, measure_pairs, fuzzy_measure_pos, fuzzy_measure_pairs,
fuzzy_measure_pairing
Alternatives
edges_sub_pix
See also
gen_measure_rectangle2
Module
1D Metrology

gen_measure_rectangle2 ( : : Row, Column, Phi, Length1, Length2,
Width, Height, Interpolation : MeasureHandle )

Prepare the extraction of straight edges perpendicular to a rectangle.


gen_measure_rectangle2 prepares the extraction of straight edges which lie perpendicular to the major
axis of a rectangle. The center of the rectangle is passed in the parameters Row and Column, the direction of
the major axis of the rectangle in Phi, and the length of the two axes, i.e., half the diameter of the rectangle, in
Length1 and Length2.
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
The edge extraction algorithm is described in the documentation of the operator measure_pos. As discussed
there, different types of interpolation can be used for the calculation of the one-dimensional gray value profile. For
Interpolation = ’nearest_neighbor’, the gray values in the measurement are obtained from the gray values of
the closest pixel, i.e., by constant interpolation. For Interpolation = ’bilinear’, bilinear interpolation is used,
while for Interpolation = ’bicubic’, bicubic interpolation is used.
To perform the actual measurement at optimal speed, all computations that can be used for multiple measurements
are already performed in the operator gen_measure_rectangle2. For this, an optimized data structure, a
so-called measure object, is constructed and returned in MeasureHandle. The size of the images in which
measurements will be performed must be specified in the parameters Width and Height.
The system parameter ’int_zooming’ (see set_system) affects the accuracy and speed of the calculations used
to construct the measure object. If ’int_zooming’ is set to ’true’, the internal calculations are performed using fixed
point arithmetic, leading to much shorter execution times. However, the geometric accuracy is slightly lower in
this mode. If ’int_zooming’ is set to ’false’, the internal calculations are performed using floating point arithmetic,
leading to the maximum geometric accuracy, but also to significantly increased execution times.
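A minimal sketch of a measurement within a rotated rectangle is shown below; the pose of the rectangle and the
image file are assumptions:
* Measure edge pairs within a rectangle rotated by 0.3 rad (sketch).
read_image (Image, 'fuse')
get_image_size (Image, Width, Height)
gen_measure_rectangle2 (240, 320, 0.3, 80, 10, Width, Height, 'bilinear', MeasureHandle)
measure_pairs (Image, MeasureHandle, 1.0, 30.0, 'all', 'all', RowEdgeFirst, ColumnEdgeFirst, AmplitudeFirst, RowEdgeSecond, ColumnEdgeSecond, AmplitudeSecond, IntraDistance, InterDistance)
close_measure (MeasureHandle)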
Attention
Note that when using bilinear or bicubic interpolation, not only the measurement rectangle but additionally the
margin around the rectangle must fit into the image. The width of the margin (in all four directions) must be at
least one pixel for bilinear interpolation and two pixels for bicubic interpolation. For projection lines that do not
fulfill this condition, no gray value is computed. Thus, no edge can be extracted at these positions.

Please also note that the center coordinates of the rectangle are rounded internally, so that the center lies on the
pixel grid. This is done to ensure consistency.
Parameters
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.y ; real / integer
Row coordinate of the center of the rectangle.
Default: 300.0
Suggested values: Row ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Value range: 0.0 ≤ Row ≤ 511.0 (lin)
Minimum increment: 1.0
Recommended increment: 10.0
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.x ; real / integer
Column coordinate of the center of the rectangle.
Default: 200.0
Suggested values: Column ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Value range: 0.0 ≤ Column ≤ 511.0 (lin)
Minimum increment: 1.0
Recommended increment: 10.0
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.angle.rad ; real / integer
Angle of longitudinal axis of the rectangle to horizontal (radians).
Default: 0.0
Suggested values: Phi ∈ {-1.178097, -0.785398, -0.392699, 0.0, 0.392699, 0.785398, 1.178097}
(lin)
Minimum increment: 0.001
Recommended increment: 0.1
. Length1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hwidth ; real / integer
Half width of the rectangle.
Default: 100.0
Suggested values: Length1 ∈ {3.0, 5.0, 10.0, 15.0, 20.0, 50.0, 100.0, 200.0, 300.0, 500.0}
Value range: 1.0 ≤ Length1 (lin)
Minimum increment: 1.0
Recommended increment: 10.0
. Length2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hheight ; real / integer
Half height of the rectangle.
Default: 20.0
Suggested values: Length2 ∈ {1.0, 2.0, 3.0, 5.0, 10.0, 15.0, 20.0, 50.0, 100.0, 200.0}
Value range: 0.0 ≤ Length2 (lin)
Minimum increment: 1.0
Recommended increment: 10.0
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the image to be processed subsequently.
Default: 512
Suggested values: Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768}
Value range: 0 ≤ Width (lin)
Minimum increment: 1
Recommended increment: 16
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the image to be processed subsequently.
Default: 512
Suggested values: Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576}
Value range: 0 ≤ Height (lin)
Minimum increment: 1
Recommended increment: 16
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of interpolation to be used.
Default: ’nearest_neighbor’
List of values: Interpolation ∈ {’nearest_neighbor’, ’bilinear’, ’bicubic’}


. MeasureHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .measure ; handle


Measure object handle.
Result
If the parameter values are correct the operator gen_measure_rectangle2 returns the value 2
(H_MSG_TRUE). Otherwise an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
draw_rectangle2
Possible Successors
measure_pos, measure_pairs, fuzzy_measure_pos, fuzzy_measure_pairs,
fuzzy_measure_pairing, measure_thresh
Alternatives
edges_sub_pix
See also
gen_measure_arc
Module
1D Metrology

get_measure_param ( : : MeasureHandle,
GenParamName : GenParamValue )

Return the parameters and properties of a measure object.


The operator get_measure_param returns parameters and properties of the measure object
MeasureHandle. The names of the desired properties are passed in the generic parameter GenParamName,
the corresponding values are returned in GenParamValue.
The properties that can be passed to GenParamName depend on the kind of measure object as well as its param-
eters. If a property is not available, get_measure_param returns an error.

Properties for all measure objects

• ’type’: Type of the measure object, either ’rectangle2’ if the object was created with
gen_measure_rectangle2, or ’arc’ if it was created with gen_measure_arc.
• ’image_width’, ’image_height’: Image width and height, respectively, for which the measure object was
created.
• ’interpolation’: Used interpolation mode: ’nearest_neighbor’, ’bilinear’ or ’bicubic’.
Properties for rectangular measure objects
Properties for measure objects that were created with gen_measure_rectangle2.
• ’row’, ’column’: Row and column, respectively, of the center of the measurement rectangle.
• ’phi’: Rotation angle of the measurement rectangle.
• ’length1’, ’length2’: Side lengths of the measurement rectangle.
Properties for annular-shaped measure objects
Properties for measure objects that were created with gen_measure_arc.
• ’row’, ’column’: Row and column, respectively, of the center of the annular arc.
• ’radius’: Radius of the annular arc.

• ’angle_start’, ’angle_extent’: Starting angle and angular extent of the annular arc.
• ’annulus_radius’: Radius (half width) of the annular arc.
Properties for measure objects with fuzzy functions
Properties for measure objects, for which fuzzy functions have been set with set_fuzzy_measure or
set_fuzzy_measure_norm_pair.
• ’fuzzy_contrast’: Fuzzy function for evaluation of the edge amplitudes.
• ’fuzzy_position’, ’fuzzy_position_center’, ’fuzzy_position_end’, ’fuzzy_position_first_edge’,
’fuzzy_position_last_edge’: Fuzzy function for evaluation of the distance of edge candidates to
the reference point on the measure object.
• ’fuzzy_position_pair’, ’fuzzy_position_pair_center’, ’fuzzy_position_pair_end’,
’fuzzy_position_first_pair’, ’fuzzy_position_last_pair’: Fuzzy function for evaluation of the
distance of edge pairs to the reference point on the measure object.
• ’fuzzy_size’, ’fuzzy_size_diff’, ’fuzzy_size_abs_diff’: Fuzzy function for evaluation of the distance be-
tween two edges of a pair.
• ’fuzzy_gray’: Fuzzy function for weighting the mean projected gray value between two edges of a pair.
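The following sketch queries some of the properties listed above; the measure object parameters are assumptions:
* Query type, pose, and interpolation mode of a measure object (sketch).
gen_measure_rectangle2 (240, 320, 0.0, 100, 10, 512, 512, 'bilinear', MeasureHandle)
get_measure_param (MeasureHandle, 'type', Type)
get_measure_param (MeasureHandle, ['row', 'column', 'phi'], Pose)
get_measure_param (MeasureHandle, 'interpolation', Interpolation)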

Parameters

. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle


Measure object handle.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Name of the parameter to be returned.
Default: ’type’
List of values: GenParamName ∈ {’type’, ’image_width’, ’image_height’, ’interpolation’, ’row’, ’column’,
’phi’, ’length1’, ’length2’, ’radius’, ’angle_start’, ’angle_extent’, ’annulus_radius’, ’fuzzy_contrast’,
’fuzzy_gray’, ’fuzzy_position’, ’fuzzy_position_center’, ’fuzzy_position_end’, ’fuzzy_position_first_edge’,
’fuzzy_position_last_edge’, ’fuzzy_position_pair’, ’fuzzy_position_pair_center’, ’fuzzy_position_pair_end’,
’fuzzy_position_first_pair’, ’fuzzy_position_last_pair’, ’fuzzy_size’, ’fuzzy_size_diff’,
’fuzzy_size_abs_diff’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; real / string / integer
Value of the parameter.
Result
If the parameter values are correct the operator get_measure_param returns the value 2 (H_MSG_TRUE).
Otherwise an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
gen_measure_rectangle2, gen_measure_arc
See also
gen_measure_rectangle2, gen_measure_arc, translate_measure
Module
1D Metrology

measure_pairs ( Image : : MeasureHandle, Sigma, Threshold,
Transition, Select : RowEdgeFirst, ColumnEdgeFirst,
AmplitudeFirst, RowEdgeSecond, ColumnEdgeSecond, AmplitudeSecond,
IntraDistance, InterDistance )

Extract straight edge pairs perpendicular to a rectangle or annular arc.


measure_pairs serves to extract straight edge pairs which lie perpendicular to the major axis of a rectangle or
annular arc.
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
The extraction algorithm of measure_pairs is identical to measure_pos. In addition, the edges are grouped
to pairs: If Transition = ’positive’, the edge points with a dark-to-light transition in the direction of the ma-
jor axis of the rectangle are returned in RowEdgeFirst and ColumnEdgeFirst. In this case, the corre-
sponding edges with a light-to-dark transition are returned in RowEdgeSecond and ColumnEdgeSecond. If
Transition = ’negative’, the behavior is exactly opposite. If Transition = ’all’, the first detected edge
defines the transition for RowEdgeFirst and ColumnEdgeFirst. I.e., dependent on the positioning of the
measure object, edge pairs with a light-dark-light transition or edge pairs with a dark-light-dark transition are
returned. This is suited, e.g., to measure objects with different brightness relative to the background.
If more than one consecutive edge with the same transition is found, the first one is used as a pair element. This
behavior may cause problems in applications in which the threshold Threshold cannot be selected high enough
to suppress consecutive edges of the same transition. For these applications, a second pairing mode exists that only
selects the respective strongest edges of a sequence of consecutive rising and falling edges. This mode is selected
by appending ’_strongest’ to any of the above modes for Transition, e.g., ’negative_strongest’. Finally, it is
possible to select which edge pairs are returned. If Select is set to ’all’, all edge pairs are returned. If it is set to
’first’, only the first of the extracted edge pairs is returned, while if it is set to ’last’, only the last one is returned.
The extracted edges are returned as single points which lie on the major axis of the rectangle. The corresponding
edge amplitudes are returned in AmplitudeFirst and AmplitudeSecond. In addition, the distance between
each edge pair is returned in IntraDistance and the distance between consecutive edge pairs is returned
in InterDistance. Here, IntraDistance[i] corresponds to the distance between EdgeFirst[i] and EdgeSec-
ond[i], while InterDistance[i] corresponds to the distance between EdgeSecond[i] and EdgeFirst[i+1], i.e., the
tuple InterDistance contains one element less than the tuples of the edge pairs.
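The following sketch uses the ’positive_strongest’ mode described above; the geometry, the threshold, and the
image file are assumptions:
* Pair only the strongest edges of each rising/falling sequence (sketch).
read_image (Image, 'fuse')
get_image_size (Image, Width, Height)
gen_measure_rectangle2 (240, 320, 0, 100, 10, Width, Height, 'nearest_neighbor', MeasureHandle)
measure_pairs (Image, MeasureHandle, 1.5, 20.0, 'positive_strongest', 'all', RowEdgeFirst, ColumnEdgeFirst, AmplitudeFirst, RowEdgeSecond, ColumnEdgeSecond, AmplitudeSecond, IntraDistance, InterDistance)
* IntraDistance then holds the width of each bright structure along the profile.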
Attention
measure_pairs only returns meaningful results if the assumptions that the edges are straight and perpendicular
to the major axis of the rectangle are fulfilled. Thus, it should not be used to extract edges from curved objects,
for example. Furthermore, the user should ensure that the rectangle is as close to perpendicular as possible to the
edges in the image. Additionally, Sigma must not become larger than approx. 0.5 * Length1 (for Length1 see
gen_measure_rectangle2).
It should be kept in mind that measure_pairs ignores the domain of Image for efficiency reasons. If certain
regions in the image should be excluded from the measurement a new measure object with appropriately modified
parameters should be generated.
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2 / real
Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Sigma of gaussian smoothing.
Default: 1.0
Suggested values: Sigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Value range: 0.4 ≤ Sigma ≤ 100 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Minimum edge amplitude.
Default: 30.0
Suggested values: Threshold ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Value range: 1 ≤ Threshold ≤ 255 (lin)
Minimum increment: 0.5
Recommended increment: 2
. Transition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of gray value transition that determines how edges are grouped to edge pairs.
Default: ’all’
List of values: Transition ∈ {’all’, ’positive’, ’negative’, ’all_strongest’, ’positive_strongest’,
’negative_strongest’}
. Select (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Selection of edge pairs.
Default: ’all’
List of values: Select ∈ {’all’, ’first’, ’last’}
. RowEdgeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinate of the center of the first edge.
. ColumnEdgeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinate of the center of the first edge.
. AmplitudeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Edge amplitude of the first edge (with sign).
. RowEdgeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinate of the center of the second edge.
. ColumnEdgeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinate of the center of the second edge.
. AmplitudeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Edge amplitude of the second edge (with sign).
. IntraDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Distance between edges of an edge pair.
. InterDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Distance between consecutive edge pairs.
Result
If the parameter values are correct the operator measure_pairs returns the value 2 (H_MSG_TRUE). Otherwise
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
gen_measure_rectangle2
Possible Successors
close_measure
Alternatives
edges_sub_pix, fuzzy_measure_pairs, fuzzy_measure_pairing
See also
measure_pos, fuzzy_measure_pos
Module
1D Metrology

measure_pos ( Image : : MeasureHandle, Sigma, Threshold,
Transition, Select : RowEdge, ColumnEdge, Amplitude, Distance )

Extract straight edges perpendicular to a rectangle or annular arc.


measure_pos extracts straight edges which lie perpendicular to the major axis of a rectangle or annular arc.
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
The algorithm of measure_pos works by averaging the gray values in “slices” perpendicular to the major axis
of the rectangle or annular arc in order to obtain a one-dimensional edge profile. The sampling is done at sub-
pixel positions in the image Image at integer row and column distances (in the coordinate frame of the rect-
angle) from the center of the rectangle. Since this involves some calculations which can be used repeatedly in
several measurements, the operator gen_measure_rectangle2 or gen_measure_arc is used to perform
these calculations only once, thus increasing the speed of measure_pos significantly. Since there is a trade-off
between accuracy and speed in the subpixel calculations of the gray values, and thus in the accuracy of the ex-
tracted edge positions, different interpolation schemes can be selected in gen_measure_rectangle2. (The
interpolation only influences rectangles not aligned with the image axes.) The measure object generated with
gen_measure_rectangle2 is passed in MeasureHandle.
After the one-dimensional edge profile has been calculated, subpixel edge locations are computed by convolving
the profile with the derivatives of a Gaussian smoothing kernel of standard deviation Sigma. Salient edges can be
selected with the parameter Threshold, which constitutes a threshold on the amplitude values (Amplitude),
i.e., the absolute
√ value of the first derivative of the edge. Note that the amplitude values are scaled by the factor
Sigma · 2π. Additionally, it is possible to select only positive edges, i.e., edges which constitute a dark-to-light
transition in the direction of the major axis of the rectangle or the arc (Transition = ’positive’), only negative
edges, i.e., light-to-dark transitions (Transition = ’negative’), or both types of edges (Transition = ’all’).
Finally, it is possible to select which edge points are returned. If Select is set to ’all’, all edge points are returned.
If it is set to ’first’, only the first of the extracted edge points is returned, while if it is set to ’last’, only the last one is
returned.
The extracted edges are returned as single points which lie on the major axis of the rectangle or arc in (RowEdge,
ColumnEdge). The corresponding edge amplitudes are returned in Amplitude. In addition, the distance
between consecutive edge points is returned in Distance. Here, Distance[i] corresponds to the distance be-
tween Edge[i] and Edge[i+1], i.e., the tuple Distance contains one element less than the tuples RowEdge and
ColumnEdge.
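A minimal sketch of a call sequence is shown below; the geometry, the threshold, and the image file are
assumptions:
* Extract all edges, then only the first negative edge (sketch).
read_image (Image, 'fuse')
get_image_size (Image, Width, Height)
gen_measure_rectangle2 (240, 320, 0, 100, 10, Width, Height, 'nearest_neighbor', MeasureHandle)
measure_pos (Image, MeasureHandle, 1.0, 30.0, 'all', 'all', RowEdge, ColumnEdge, Amplitude, Distance)
NumEdges := |RowEdge|
measure_pos (Image, MeasureHandle, 1.0, 30.0, 'negative', 'first', RowFirst, ColumnFirst, AmplitudeFirst, DistanceFirst)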
Attention
measure_pos only returns meaningful results if the assumptions that the edges are straight and perpendicular
to the major axis of the rectangle or arc are fulfilled. Thus, it should not be used to extract edges from curved
objects, for example. Furthermore, the user should ensure that the rectangle or arc is as close to perpendicular as
possible to the edges in the image. Additionally, Sigma must not become larger than approx. 0.5 * Length1
(for Length1 see gen_measure_rectangle2).
It should be kept in mind that measure_pos ignores the domain of Image for efficiency reasons. If certain
regions in the image should be excluded from the measurement a new measure object with appropriately modified
parameters should be generated.
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2 / real
Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Sigma of gaussian smoothing.
Default: 1.0
Suggested values: Sigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Value range: 0.4 ≤ Sigma ≤ 100 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Minimum edge amplitude.
Default: 30.0
Suggested values: Threshold ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Value range: 1 ≤ Threshold ≤ 255 (lin)
Minimum increment: 0.5
Recommended increment: 2
. Transition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Light/dark or dark/light edge.
Default: ’all’
List of values: Transition ∈ {’all’, ’positive’, ’negative’}
. Select (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Selection of end points.
Default: ’all’
List of values: Select ∈ {’all’, ’first’, ’last’}

. RowEdge (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real


Row coordinate of the center of the edge.
. ColumnEdge (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinate of the center of the edge.
. Amplitude (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Edge amplitude of the edge (with sign).
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Distance between consecutive edges.
Result
If the parameter values are correct the operator measure_pos returns the value 2 (H_MSG_TRUE). Otherwise
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
gen_measure_rectangle2
Possible Successors
close_measure
Alternatives
edges_sub_pix, fuzzy_measure_pos
See also
measure_pairs, fuzzy_measure_pairs, fuzzy_measure_pairing
Module
1D Metrology

measure_projection ( Image : : MeasureHandle : GrayValues )

Extract a gray value profile perpendicular to a rectangle or annular arc.


measure_projection extracts a one-dimensional gray value profile perpendicular to a rectangle or annular
arc. This is done by averaging the gray values in “slices” perpendicular to the major axis of the rectangle or
arc. The sampling is done at subpixel positions in the image Image at integer row and column distances (in the
coordinate frame of the rectangle) from the center of the rectangle. Since this involves some calculations which can
be used repeatedly in several projections, the operator gen_measure_rectangle2 is used to perform these
calculations only once, thus increasing the speed of measure_projection significantly. Since there is a trade-
off between accuracy and speed in the subpixel calculations of the gray values, different interpolation schemes can
be selected in gen_measure_rectangle2 (the interpolation only influences rectangles not aligned with the
image axes). The measure object generated with gen_measure_rectangle2 is passed in MeasureHandle.
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
Attention
It should be kept in mind that measure_projection ignores the domain of Image for efficiency reasons. If
certain regions in the image should be excluded from the measurement a new measure object with appropriately
modified parameters should be generated.
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2 / real
Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. GrayValues (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number-array ; real
Gray value profile.
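Example
The following is a minimal usage sketch; the image file and the rectangle parameters are assumptions chosen for illustration.

* Average the gray values in slices perpendicular to the major axis
* (image and rectangle parameters are illustrative assumptions).
read_image (Image, 'fuse')
get_image_size (Image, Width, Height)
gen_measure_rectangle2 (200, 300, 0, 100, 10, Width, Height, 'bilinear', \
                        MeasureHandle)
measure_projection (Image, MeasureHandle, GrayValues)
close_measure (MeasureHandle)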


Result
If the parameter values are correct the operator measure_projection returns the value 2 (H_MSG_TRUE).
Otherwise an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
gen_measure_rectangle2
Possible Successors
close_measure
Alternatives
gray_projections
Module
1D Metrology

measure_thresh ( Image : : MeasureHandle, Sigma, Threshold, Select : RowThresh, ColumnThresh, Distance )

Extracting points with a particular gray value along a rectangle or an annular arc.
measure_thresh extracts points for which the gray value within an one-dimensional gray value profile is equal
to the specified threshold Threshold. The gray value profile is projected onto the major axis of the measure
rectangle which is passed with the parameter MeasureHandle, so the threshold points calculated within the
gray value profile correspond to certain image coordinates on the rectangle’s major axis. These coordinates are
returned as the operator results in RowThresh and ColumnThresh.
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
If the gray value profile intersects the threshold line several times, the parameter Select determines which
values to return. Possible settings are ’first’, ’last’, ’first_last’ (first and last) or ’all’. For the last two cases
Distance returns the distances between the calculated points.
The gray value profile is created by averaging the gray values along all line segments, which are defined by the
measure rectangle as follows:

1. The segments are perpendicular to the major axis of the rectangle,


2. they have an integer distance to the center of the rectangle,
3. the rectangle bounds the segments.

For every line segment, the average of the gray values of all points with an integer distance to the major axis is
calculated. Due to translation and rotation of the measure rectangle with respect to the image coordinates the input
image Image is in general sampled at subpixel positions.
Since this involves some calculations which can be used repeatedly in several projections, the operator
gen_measure_rectangle2 is used to perform these calculations only once in advance. Here, the measure
object MeasureHandle is generated and different interpolation schemes can be selected.
Attention
measure_thresh only returns meaningful results if the assumptions that the edges are straight and perpendicu-
lar to the major axis of the rectangle are fulfilled. Thus, it should not be used to extract edges from curved objects,
for example. Furthermore, the user should ensure that the rectangle is as close to perpendicular as possible to the
edges in the image. Additionally, Sigma must not become larger than approx. 0.5 * Length1 (for Length1 see
gen_measure_rectangle2).
It should be kept in mind that measure_thresh ignores the domain of Image for efficiency reasons. If certain
regions in the image should be excluded from the measurement a new measure object with appropriately modified
parameters should be generated.


Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2 / real
Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Sigma of gaussian smoothing.
Default: 1.0
Suggested values: Sigma ∈ {0.0, 0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Value range: 0.0 ≤ Sigma ≤ 100 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Threshold.
Default: 128.0
Value range: 0 ≤ Threshold ≤ 255 (lin)
Minimum increment: 0.5
Recommended increment: 1
. Select (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Selection of points.
Default: ’all’
List of values: Select ∈ {’all’, ’first’, ’last’, ’first_last’}
. RowThresh (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinates of points with threshold value.
. ColumnThresh (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinates of points with threshold value.
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Distance between consecutive points.
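Example
The following is a minimal usage sketch; the image file, the rectangle parameters, and the threshold are assumptions chosen for illustration.

* Find all positions where the averaged gray value profile equals 128
* (image and rectangle parameters are illustrative assumptions).
read_image (Image, 'fuse')
get_image_size (Image, Width, Height)
gen_measure_rectangle2 (200, 300, 0, 100, 10, Width, Height, \
                        'nearest_neighbor', MeasureHandle)
measure_thresh (Image, MeasureHandle, 1.0, 128.0, 'all', RowThresh, \
                ColumnThresh, Distance)
close_measure (MeasureHandle)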
Result
If the parameter values are correct the operator measure_thresh returns the value 2 (H_MSG_TRUE). Other-
wise, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
gen_measure_rectangle2
Possible Successors
close_measure
Alternatives
measure_pos, edges_sub_pix, measure_pairs
Module
1D Metrology

read_measure ( : : FileName : MeasureHandle )

Read a measure object from a file.


read_measure reads a measure object, which has been written with write_measure from the file
FileName. The default HALCON file extension for a measure object is ’msr’. The values contained in the
read measure object are stored in a measure object with the handle MeasureHandle.


For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
Parameters
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name.
File extension: .msr
. MeasureHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .measure ; handle
Measure object handle.
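Example
The following is a minimal usage sketch; the file name is an assumption chosen for illustration.

* Restore a previously written measure object and reuse it
* (file name is an illustrative assumption).
read_measure ('measure_object.msr', MeasureHandle)
measure_pos (Image, MeasureHandle, 1.0, 30, 'all', 'all', RowEdge, \
             ColumnEdge, Amplitude, Distance)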
Result
If the parameters are valid, the operator read_measure returns the value 2 (H_MSG_TRUE). If necessary, an
exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
measure_pos, measure_pairs
See also
write_measure
Module
1D Metrology

reset_fuzzy_measure ( : : MeasureHandle, SetType : )

Reset a fuzzy function.


reset_fuzzy_measure discards a fuzzy function of the fuzzy set SetType. This function should have been
set by set_fuzzy_measure before.
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
Parameters
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. SetType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Selection of the fuzzy set.
Default: ’contrast’
List of values: SetType ∈ {’position’, ’position_pair’, ’size’, ’gray’, ’contrast’}
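Example
The following is a minimal usage sketch; the fuzzy function values are assumptions chosen for illustration.

* Weight edges by their contrast, then discard the weighting again
* (function values are illustrative assumptions).
create_funct_1d_pairs ([20.0,40.0], [0.0,1.0], ContrastFunction)
set_fuzzy_measure (MeasureHandle, 'contrast', ContrastFunction)
* ... calls to fuzzy_measure_pos or fuzzy_measure_pairs ...
reset_fuzzy_measure (MeasureHandle, 'contrast')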
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

This operator modifies the state of the following input parameter:


• MeasureHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.


Possible Predecessors
set_fuzzy_measure
Possible Successors
fuzzy_measure_pos, fuzzy_measure_pairs
See also
set_fuzzy_measure, set_fuzzy_measure_norm_pair
Module
1D Metrology

serialize_measure ( : : MeasureHandle : SerializedItemHandle )

Serialize a measure object.


serialize_measure serializes the data of a measure object (see fwrite_serialized_item for an in-
troduction of the basic principle of serialization). The same data that is written in a file by write_measure
is converted to a serialized item. The measure object is defined by the handle MeasureHandle. The se-
rialized measure object is returned by the handle SerializedItemHandle and can be deserialized by
deserialize_measure.
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
Parameters
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. SerializedItemHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
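Example
The following is a minimal usage sketch; the file name and the rectangle parameters are assumptions chosen for illustration.

* Serialize a measure object, write it to a file, and restore it
* (file name and rectangle parameters are illustrative assumptions).
gen_measure_rectangle2 (50, 100, 0, 200, 100, 512, 512, 'nearest_neighbor', \
                        MeasureHandle)
serialize_measure (MeasureHandle, SerializedItemHandle)
open_file ('measure_object.ser', 'output_binary', FileHandle)
fwrite_serialized_item (FileHandle, SerializedItemHandle)
close_file (FileHandle)
open_file ('measure_object.ser', 'input_binary', FileHandle)
fread_serialized_item (FileHandle, SerializedItemHandle2)
deserialize_measure (SerializedItemHandle2, MeasureHandle2)
close_file (FileHandle)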
Result
If the parameters are valid, the operator serialize_measure returns the value 2 (H_MSG_TRUE). If neces-
sary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• MeasureHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
gen_measure_rectangle2, gen_measure_arc
Possible Successors
fwrite_serialized_item, send_serialized_item, deserialize_measure
See also
read_measure, write_measure
Module
1D Metrology

set_fuzzy_measure ( : : MeasureHandle, SetType, Function : )

Specify a fuzzy function.


set_fuzzy_measure specifies a fuzzy function passed in Function. The specified fuzzy functions enable
fuzzy_measure_pos and fuzzy_measure_pairs / fuzzy_measure_pairing to evaluate and select
the detected edge candidates. For this purpose, weighting characteristics for different edge features can be defined
by one function each. Such a specified feature is called fuzzy set. Specifying no function for a fuzzy set means not
to use this feature for the final edge evaluation. Setting a second fuzzy function to a set means to discard the first
defined function and replace it by the second one. A previously defined fuzzy function can be discarded completely
by reset_fuzzy_measure.
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
Functions for five different fuzzy set types selected by the SetType parameter can be defined, the sub types of a
set being mutually exclusive:

• ’contrast’ will use the fuzzy function to evaluate the amplitudes of the edge candidates. When extracting
edge pairs, the fuzzy evaluation is obtained by the geometric average of the fuzzy contrast scores of both
edges.
• The fuzzy function of ’position’ evaluates the distance of each edge candidate to the reference point of
the measure object, generated by gen_measure_arc or gen_measure_rectangle2. The reference
point is located at the beginning whereas ’position_center’ or ’position_end’ sets the reference point to the
middle or the end of the one-dimensional gray value profile instead. If the fuzzy position evaluation depends
on the position of the object along the profile, ’position_first_edge’ / ’position_last_edge’ sets the reference
point at the position of the first/last extracted edge. When extracting edge pairs the position of a pair is
referenced by the geometric average of the fuzzy position scores of both edges.
• Similar to ’position’, ’position_pair’ evaluates the distance of each edge pair to the reference point
of the measure object. The position of a pair is defined by the center point between both
edges. The object’s reference can be set by ’position_pair_center’, ’position_pair_end’ and ’posi-
tion_first_pair’, ’position_last_pair’, respectively. Contrary to ’position’, this set is only used by
fuzzy_measure_pairs/fuzzy_measure_pairing.
• ’size’ denotes a fuzzy set that evaluates the normed distance of the two edges of a pair in pixels.
This set is only used by fuzzy_measure_pairs/fuzzy_measure_pairing. Specifying an up-
per bound for the size by terminating the function with a corresponding fuzzy value of 0.0 will speed up
fuzzy_measure_pairs / fuzzy_measure_pairing because not all possible pairs need to be con-
sidered.
• ’gray’ sets a fuzzy function to weight the mean projected gray value between two edges of a pair. This set is
only used by fuzzy_measure_pairs / fuzzy_measure_pairing.

A fuzzy function is defined as a piecewise linear function by at least two pairs of values, sorted in an ascending
order by their x value. The x values represent the edge feature and must lie within the parameter space of the set
type, i.e., in case of ’contrast’ and ’gray’ feature and, e.g., byte images within the range 0.0 ≤ x ≤ 255.0. In
case of ’size’ x has to satisfy 0.0 ≤ x whereas in case of ’position’ x can be any real number. The y values of the
fuzzy function represent the weight of the corresponding feature value and have to satisfy the range of 0.0 ≤ y ≤
1.0. Outside of the function’s interval, defined by the smallest and the greatest x value, the y values of the interval
borders are continued constantly. Such fuzzy functions can be generated by create_funct_1d_pairs.
If more than one set is defined, fuzzy_measure_pos / fuzzy_measure_pairs /
fuzzy_measure_pairing yield the overall fuzzy weighting by the geometric mean of the weights of
each set.
Parameters
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. SetType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Selection of the fuzzy set.
Default: ’contrast’
List of values: SetType ∈ {’position’, ’position_center’, ’position_end’, ’position_first_edge’,
’position_last_edge’, ’position_pair_center’, ’position_pair_end’, ’position_first_pair’, ’position_last_pair’,
’size’, ’gray’, ’contrast’}
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d ; real / integer
Fuzzy function.


Example

* how to use a fuzzy function
* ...
gen_measure_rectangle2 (50, 100, 0, 200, 100, 512, 512, 'nearest_neighbor', \
                        MeasureHandle)
* create a generalized fuzzy function to evaluate edge pairs
* (30% uncertainty)
create_funct_1d_pairs ([0.7,1.0,1.3], [0.0,1.0,0.0], SizeFunction)
* and transform it to the expected size of 13.45 pixels
transform_funct_1d (SizeFunction, [1.0,0.0,13.45,0.0], TransformedFunction)
set_fuzzy_measure (MeasureHandle, 'size', TransformedFunction)
fuzzy_measure_pairs (Image, MeasureHandle, 1, 30, 0.5, 'all', RowEdgeFirst, \
                     ColumnEdgeFirst, AmplitudeFirst, RowEdgeSecond, \
                     ColumnEdgeSecond, AmplitudeSecond, RowEdgeCenter, \
                     ColumnEdgeCenter, FuzzyScore, IntraDistance, \
                     InterDistance)

Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• MeasureHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
gen_measure_arc, gen_measure_rectangle2, create_funct_1d_pairs,
transform_funct_1d
Possible Successors
fuzzy_measure_pos, fuzzy_measure_pairs
Alternatives
set_fuzzy_measure_norm_pair
See also
reset_fuzzy_measure
Module
1D Metrology

set_fuzzy_measure_norm_pair ( : : MeasureHandle, PairSize, SetType, Function : )

Specify a normalized fuzzy function for edge pairs.


set_fuzzy_measure_norm_pair specifies a normalized fuzzy function passed in Function.
The specified fuzzy functions enable fuzzy_measure_pos, fuzzy_measure_pairs and
fuzzy_measure_pairing to evaluate and select the detected candidates of edges and edge pairs. For
this purpose, weighting characteristics for different edge features can be defined by one function each. Such a
specified feature is called fuzzy set. Specifying no function for a fuzzy set means not to use this feature for the
final edge evaluation. Setting a second fuzzy function to a fuzzy set means to discard the first defined function
and replace it by the second one. In contrast to set_fuzzy_measure, the abscissa x of these functions
must be defined relative to the desired size s of the edge pairs (passed in PairSize). This enables a generalized
usage of the defined functions. A previously defined normalized fuzzy function can be discarded completely by
reset_fuzzy_measure.
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
Functions for three different fuzzy set types selected by the SetType parameter can be defined, the sub types of
a set being mutually exclusive:

• ’size’ denotes a fuzzy set that evaluates the normalized distance d of the two edges of a pair in pixels:
x = d / s (x ≥ 0).
Specifying an upper bound x_max for the size by terminating the function with a corresponding fuzzy value
of 0.0 will speed up fuzzy_measure_pairs / fuzzy_measure_pairing because not all possible
pairs must be considered. Additionally, this fuzzy set can also be specified as a normalized size difference by
’size_diff’
x = (s − d) / s (x ≤ 1)
and an absolute normalized size difference by ’size_abs_diff’
x = |s − d| / s (0 ≤ x ≤ 1).
• The fuzzy function of ’position’ evaluates the signed distance p of each edge candidate to the reference point
of the measure object, generated by gen_measure_arc or gen_measure_rectangle2:
x = p / s.
The reference point is located at the beginning whereas ’position_center’ or ’position_end’ sets the reference
point to the middle or the end of the one-dimensional gray value profile, instead. If the fuzzy position
evaluation depends on the position of the object along the profile, ’position_first_edge’ / ’position_last_edge’
sets the reference point at the position of the first/last extracted edge. When extracting edge pairs, the position
of a pair is referenced by the geometric average of the fuzzy position scores of both edges.
• Similar to ’position’, ’position_pair’ evaluates the signed distance of each edge pair to the refer-
ence point of the measure object. The position of a pair is defined by the center point between
both edges. The object’s reference can be set by ’position_pair_center’, ’position_pair_end’ and ’po-
sition_first_pair’, ’position_last_pair’, respectively. Contrary to ’position’, this set is only used by
fuzzy_measure_pairs/fuzzy_measure_pairing.

A normalized fuzzy function is defined as a piecewise linear function by at least two pairs of values, sorted in
an ascending order by their x value. The y values of the fuzzy function represent the weight of the corresponding
feature value and must satisfy the range of 0.0 ≤ y ≤ 1.0. Outside of the function’s interval, defined by the smallest
and the greatest x value, the y values of the interval borders are continued constantly. Such Fuzzy functions can be
generated by create_funct_1d_pairs.
If more than one set is defined, fuzzy_measure_pos / fuzzy_measure_pairs /
fuzzy_measure_pairing yield the overall fuzzy weighting by the geometric mean of the weights of
each set.
Parameters
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. PairSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Favored width of edge pairs.
Default: 10.0
Suggested values: PairSize ∈ {4.0, 6.0, 8.0, 10.0, 15.0, 20.0, 30.0}
Value range: 0.0 ≤ PairSize
Minimum increment: 0.1
Recommended increment: 1.0

. SetType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Selection of the fuzzy set.
Default: ’size_abs_diff’
List of values: SetType ∈ {’size’, ’size_diff’, ’size_abs_diff’, ’position’, ’position_center’, ’position_end’,
’position_first_edge’, ’position_last_edge’, ’position_pair_center’, ’position_pair_end’, ’position_first_pair’,
’position_last_pair’}
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d ; real / integer
Fuzzy function.
Example

* how to use a fuzzy function
* ...
gen_measure_rectangle2 (50, 100, 0, 200, 100, 512, 512, 'nearest_neighbor', \
                        MeasureHandle)
* create a generalized fuzzy function to evaluate edge pairs
* (30% uncertainty)
create_funct_1d_pairs ([0.7,1.0,1.3], [0.0,1.0,0.0], SizeFunction)
* and set it for an expected pair size of 13.45 pixels
set_fuzzy_measure_norm_pair (MeasureHandle, 13.45, 'size', SizeFunction)
fuzzy_measure_pairs (Image, MeasureHandle, 1, 30, 0.5, 'all', RowEdgeFirst, \
                     ColumnEdgeFirst, AmplitudeFirst, RowEdgeSecond, \
                     ColumnEdgeSecond, AmplitudeSecond, RowEdgeCenter, \
                     ColumnEdgeCenter, FuzzyScore, IntraDistance, \
                     InterDistance)

Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• MeasureHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
gen_measure_arc, gen_measure_rectangle2, create_funct_1d_pairs
Possible Successors
fuzzy_measure_pairs, fuzzy_measure_pairing
Alternatives
transform_funct_1d, set_fuzzy_measure
See also
reset_fuzzy_measure
Module
1D Metrology

translate_measure ( : : MeasureHandle, Row, Column : )

Translate a measure object.


translate_measure translates the reference point of the measure object given by MeasureHandle to the
point (Row, Column). If the measure object and the translated measure object lie completely within the image,


the measure object is shifted to the new reference point in an efficient manner. Otherwise, the measure object
is generated anew with gen_measure_rectangle2 or gen_measure_arc using the parameters that were
specified when the measure object was created and the new reference point.
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
Parameters
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; real / integer
Row coordinate of the new reference point.
Default: 50.0
Suggested values: Row ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Value range: 0.0 ≤ Row ≤ 511.0 (lin)
Minimum increment: 1.0
Recommended increment: 10.0
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; real / integer
Column coordinate of the new reference point.
Default: 100.0
Suggested values: Column ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Value range: 0.0 ≤ Column ≤ 511.0 (lin)
Minimum increment: 1.0
Recommended increment: 10.0
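Example
The following is a minimal usage sketch; the image file and the coordinates are assumptions chosen for illustration.

* Reuse the same measure object at a second reference point
* (image and coordinates are illustrative assumptions).
read_image (Image, 'fuse')
get_image_size (Image, Width, Height)
gen_measure_rectangle2 (100, 200, 0, 100, 10, Width, Height, \
                        'nearest_neighbor', MeasureHandle)
measure_pos (Image, MeasureHandle, 1.0, 30, 'all', 'all', Row1, Col1, \
             Amp1, Dist1)
translate_measure (MeasureHandle, 250, 300)
measure_pos (Image, MeasureHandle, 1.0, 30, 'all', 'all', Row2, Col2, \
             Amp2, Dist2)
close_measure (MeasureHandle)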
Result
If the parameter values are correct the operator translate_measure returns the value 2 (H_MSG_TRUE).
Otherwise an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• MeasureHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
gen_measure_rectangle2, gen_measure_arc
Possible Successors
measure_pos, measure_pairs, fuzzy_measure_pos, fuzzy_measure_pairs,
fuzzy_measure_pairing, measure_thresh
Alternatives
gen_measure_rectangle2, gen_measure_arc
See also
close_measure
Module
1D Metrology

write_measure ( : : MeasureHandle, FileName : )

Write a measure object to a file.


write_measure writes a measure object that has been created by, e.g., gen_measure_rectangle2 to the
file FileName. The measure object is defined by the handle MeasureHandle. The measure object can be read
with read_measure. The default HALCON file extension for a measure object is ’msr’.


For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
Parameters
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name.
File extension: .msr
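Example
The following is a minimal usage sketch; the file name and the rectangle parameters are assumptions chosen for illustration.

* Store a configured measure object for later use
* (file name and rectangle parameters are illustrative assumptions).
gen_measure_rectangle2 (50, 100, 0, 200, 100, 512, 512, 'nearest_neighbor', \
                        MeasureHandle)
write_measure (MeasureHandle, 'measure_object.msr')
* in a later session:
read_measure ('measure_object.msr', MeasureHandle2)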
Result
If the parameters are valid, the operator write_measure returns the value 2 (H_MSG_TRUE). If necessary, an
exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
gen_measure_rectangle2, gen_measure_arc
See also
read_measure
Module
1D Metrology



Chapter 2

2D Metrology

This chapter contains operators for 2D metrology.


Concept of 2D Metrology
With 2D metrology, you can measure the dimensions of objects that can be represented by specific geometric
primitives. The geometric shapes that can be measured comprise circles, ellipses, rectangles, and lines. You need
approximate values for the positions, orientations, and dimensions of the objects to measure. Then, the real edge
positions of the objects in the image are located near the boundaries of the approximate objects. With these edge
positions, the parameters of the geometric shapes are optimized to better fit to the image data and are returned as
measurement result.
The approximate values for the shape parameters of an object as well as some parameters that control the mea-
surement are stored in a data structure that is called metrology object. The edges of the object in the image are
located within so-called measure regions. These are rectangular regions that are arranged perpendicular to the
boundaries of the metrology objects. Parameters that adjust the dimension and distribution of the measure regions
are specified together with the approximate shape parameters for each metrology object. When the measurement
is applied, the edge positions inside all measure regions are determined and fitted to geometric shapes using a
RANSAC algorithm. All metrology objects, all further information that is necessary for the measurement, and the
measurement results are stored in a data structure that is called metrology model.

The geometric shapes in (1) are measured using 2D Metrology (2): A metrology model with 4 metrology objects
(blue contours) is created. Using the edge positions (cyan crosses) located within the measure regions (gray
rectangles) for each metrology object, the geometric shapes (green contours) are fitted and their parameters can
be queried. As shown for the circles, more than one instance per object can be found. This image is from the
example program apply_metrology_model.hdev.


In the following, the steps that are required to use 2D metrology are described briefly.

Create the metrology model and specify the image size: First, a metrology model must be created using

• create_metrology_model.

The metrology model is used as a container for one or more metrology objects. For an efficient measurement,
after creating the metrology model, the image size of the image in which the measurements will be performed
should be specified using

• set_metrology_model_image_size.

Provide approximate values: Then, metrology objects are added to the metrology model. Each metrology object
consists of the approximate shape parameters for the corresponding object in the image and of the parameters
that control the measurement. The parameters that control the measurement comprise, e.g., parameters that
specify the dimension and distribution of the measure regions. Furthermore, several generic parameters can
be adjusted for each metrology object. The metrology objects are specified with

• add_metrology_object_circle_measure for circles,


• add_metrology_object_ellipse_measure for ellipses,
• add_metrology_object_rectangle2_measure for rectangles, and
• add_metrology_object_line_measure for lines.
• add_metrology_object_generic allows to create metrology objects of different shapes (e.g.,
ellipse, circle, etc.) using one operator.

To visually inspect the defined metrology objects, you can access their XLD contours with the operator
get_metrology_object_model_contour. To visually inspect the created measure regions, you
can access their XLD contours with the operator get_metrology_object_measures.
Modify the model parameters: If a camera calibration has been performed, the camera parameters and the pose
of the measurement plane can be set with

• set_metrology_model_param.

Then, the result of the measurements returned by get_metrology_object_result will be in world
coordinates. The reference coordinate system in which the metrology objects are defined can also be changed
with set_metrology_model_param.
Modify object parameters: Many parameters can be set when adding the metrology objects to the metrology
model. Some of them can also be modified afterwards using the operator

• set_metrology_object_param.

Align the metrology model: To translate and rotate the metrology model before the next measurement is per-
formed, you can use the operator

• align_metrology_model.

An alignment is temporary and is replaced by the next alignment. The metrology model itself is not changed.
Note that typically the alignment parameters are obtained using shape-based matching.
Apply the measurement: The actual measurement in the image is performed with

• apply_metrology_model.

The operator locates the edges within the measure regions and fits the specified geometric shape to the edge
positions using a RANSAC algorithm. The edges are located internally using the operator measure_pos
or fuzzy_measure_pos (see also chapter 1D Measuring). The latter uses fuzzy methods and is used only
if at least one fuzzy function was set via set_metrology_object_fuzzy_param before applying the
measurement. If more than one instance of the returned object shape is needed (compare image above),
the generic parameter ’num_instances’ must be set to the number of instances that should be returned.
The parameter can be set when adding the individual metrology objects or afterwards with the operator
set_metrology_object_param.


Access the results: After the measurement, the results can be accessed. The parameters of the adapted geometric
shapes of the objects are queried with the operator

• get_metrology_object_result.

Querying only the edges used for the returned result and their amplitudes is also done using
get_metrology_object_result.
The row and column coordinates of all located edges can be accessed with

• get_metrology_object_measures.

To visualize the adapted geometric shapes, you can access their XLD contours with

• get_metrology_object_result_contour.
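The steps described above can be combined as in the following minimal sketch, which measures a single circle; the image file and the approximate circle parameters are assumptions chosen for illustration (compare the example of add_metrology_object_circle_measure below).

* Sketch of a complete 2D metrology measurement for one circle
* (image and approximate circle parameters are illustrative assumptions).
read_image (Image, 'rings_and_nuts')
get_image_size (Image, Width, Height)
create_metrology_model (MetrologyHandle)
set_metrology_model_image_size (MetrologyHandle, Width, Height)
add_metrology_object_circle_measure (MetrologyHandle, 120, 130, 35, 20, 5, \
                                     1, 30, [], [], Index)
apply_metrology_model (Image, MetrologyHandle)
get_metrology_object_result (MetrologyHandle, Index, 'all', 'result_type', \
                             'all_param', CircleParam)
get_metrology_object_result_contour (Contour, MetrologyHandle, Index, \
                                     'all', 1.5)
clear_metrology_model (MetrologyHandle)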

Further operators
In addition to the operators mentioned above, you can copy the metrology handle with
copy_metrology_model, write the metrology model to file with write_metrology_model, read
a model from file again using read_metrology_model, and serialize or deserialize a metrology model using
serialize_metrology_model or deserialize_metrology_model.
Furthermore, you can query various information from the metrology model. For example, you can query the indices
of the metrology objects with get_metrology_object_indices, query parameters that are valid for the
entire metrology model with get_metrology_model_param, query a fuzzy parameter of a metrology model
with get_metrology_object_fuzzy_param, query the number of instances of the metrology objects of a
metrology model with get_metrology_object_num_instances, and query the current configuration of
the metrology model with get_metrology_object_param.
Additionally, you can reset all parameters of a metrology model using reset_metrology_object_param
or reset only all fuzzy parameters and fuzzy functions of a metrology model using
reset_metrology_object_fuzzy_param.
Glossary
In the following, the most important terms that are used in the context of 2D Metrology are described.

metrology model Data structure that contains all metrology objects, all information needed for the measurement,
and the measurement results.
metrology object Data structure for the object to be measured with 2D metrology. The metrology object is repre-
sented by a specific geometric shape for which the shape parameters are approximately known. Additionally,
it contains parameters that control the measurement, e.g., parameters that specify the dimension and distri-
bution of the measure regions.
measure regions Rectangular regions that are arranged perpendicular to the boundaries of the approximate ob-
jects. Within these regions the edges that are used to get the exact shape parameters of the metrology objects
are extracted.
returned instance of a metrology object For each metrology object, different instances of the object can be re-
turned by the measurement, e.g., if parallel structures of the same shape exist near to the boundaries of the
approximated geometric shape (see image above). The sequence of the returned instances is arbitrary, i.e., it
is no measure for the quality of the fitting.

Further Information
See also the “Solution Guide on 2D Measuring” for further details about 2D metrology.

add_metrology_object_circle_measure ( : : MetrologyHandle, Row, Column, Radius, MeasureLength1,
MeasureLength2, MeasureSigma, MeasureThreshold, GenParamName, GenParamValue : Index )

Add a circle or a circular arc to a metrology model.

add_metrology_object_circle_measure adds a metrology object of type circle or circular arc to
a metrology model and prepares the rectangular measure regions. The handle of the model is passed in
MetrologyHandle.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The geometric shape of the metrology object of type circle is specified by its center (Row, Column) and Radius.
The rectangular measure regions lie perpendicular to the boundary of the circle. The half edge lengths of the
measure regions are set in MeasureLength1 and MeasureLength2. The centers of the measure regions lie
on the boundary of the circle. The parameter MeasureSigma specifies the standard deviation that is used by
operator apply_metrology_model to smooth the gray values of the image. Salient edges can be selected
with the parameter MeasureThreshold, which constitutes a threshold on the amplitude, i.e., the absolute value
of the first derivative of the edge. The operator add_metrology_object_circle_measure returns the
index of the added metrology object in parameter Index.
Furthermore, you can adjust some generic parameters with GenParamName and GenParamValue. The fol-
lowing values for GenParamName and GenParamValue are available:

’start_phi’: The parameter specifies the angle at the start point of a circular arc. To create a closed circle the value
of the parameter ’start_phi’ is set to 0 and the value of the parameter ’end_phi’ is set to 2π (with positive
point order). The input value is mapped automatically to the interval [0, 2π].
Suggested values: 0.0, 0.78, 6.28318
Default: 0.0
’end_phi’: The parameter specifies the angle at the end point of a circular arc. To create a closed circle the value
of the parameter ’start_phi’ is set to 0 and the value of the parameter ’end_phi’ is set to 2π (with positive
point order). The input value is mapped automatically to the interval [0, 2π].
Suggested values: 0.0, 0.78, 6.28318
Default: 6.28318
’point_order’: The parameter specifies the direction of the circular arc. For the value ’positive’, the circular arc
is defined between ’start_phi’ and ’end_phi’ in mathematically positive direction (counterclockwise). For
the value ’negative’, the circular arc is defined between ’start_phi’ and ’end_phi’ in mathematically negative
direction (clockwise).
List of values: ’positive’, ’negative’
Default: ’positive’

Additionally, all generic parameters that are available for the operator set_metrology_object_param can
be set. But note that for a lot of applications the default values are sufficient and no adjustment is necessary.
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.y(-array) ; real / integer
Row coordinate (or Y) of the center of the circle or circular arc.
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.x(-array) ; real / integer
Column (or X) coordinate of the center of the circle or circular arc.
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.radius(-array) ; real / integer
Radius of the circle or circular arc.
. MeasureLength1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Half length of the measure regions perpendicular to the boundary.
Default: 20.0
Suggested values: MeasureLength1 ∈ {10.0, 20.0, 30.0}
Value range: 1.0 ≤ MeasureLength1
Minimum increment: 1.0
Recommended increment: 10.0
Restriction: MeasureLength1 < Radius

. MeasureLength2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Half length of the measure regions tangential to the boundary.
Default: 5.0
Suggested values: MeasureLength2 ∈ {3.0, 5.0, 10.0}
Value range: 1.0 ≤ MeasureLength2
Minimum increment: 1.0
Recommended increment: 10.0
. MeasureSigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Sigma of the Gaussian function for the smoothing.
Default: 1.0
Suggested values: MeasureSigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Value range: 0.4 ≤ MeasureSigma ≤ 100.0
Minimum increment: 0.01
Recommended increment: 0.1
. MeasureThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Minimum edge amplitude.
Default: 30.0
Suggested values: MeasureThreshold ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Value range: 1 ≤ MeasureThreshold ≤ 255 (lin)
Minimum increment: 0.5
Recommended increment: 2
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’distance_threshold’, ’end_phi’, ’instances_outside_measure_regions’,
’max_num_iterations’, ’measure_distance’, ’measure_interpolation’, ’measure_select’, ’measure_transition’,
’min_score’, ’num_instances’, ’num_measures’, ’point_order’, ’rand_seed’, ’start_phi’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; real / integer / string
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {1, 2, 3, 4, 5, 10, 20, ’all’, ’true’, ’false’, ’first’, ’last’, ’positive’,
’negative’, ’uniform’, ’nearest_neighbor’, ’bilinear’, ’bicubic’}
. Index (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; integer
Index of the created metrology object.
Example

read_image (Image, 'rings_and_nuts')
create_metrology_model (MetrologyHandle)
get_image_size (Image, Width, Height)
set_metrology_model_image_size (MetrologyHandle, Width, Height)
add_metrology_object_circle_measure (MetrologyHandle, 120, 130, 35, 10, 2, \
1, 30, ['measure_distance'], [40], Index)
apply_metrology_model (Image, MetrologyHandle)
get_metrology_object_result (MetrologyHandle, Index, 'all', 'result_type', \
'all_param', Circle)
get_metrology_object_result_contour (Contour, MetrologyHandle, Index, \
'all', 1.5)

Result
If the parameters are valid, the operator add_metrology_object_circle_measure returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.


This operator modifies the state of the following input parameter:


• MetrologyHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
set_metrology_model_image_size
Possible Successors
align_metrology_model, apply_metrology_model
Alternatives
add_metrology_object_generic
See also
get_metrology_object_model_contour, set_metrology_model_param,
add_metrology_object_ellipse_measure, add_metrology_object_line_measure,
add_metrology_object_rectangle2_measure
Module
2D Metrology

add_metrology_object_ellipse_measure ( : : MetrologyHandle, Row, Column, Phi, Radius1, Radius2,
MeasureLength1, MeasureLength2, MeasureSigma, MeasureThreshold, GenParamName,
GenParamValue : Index )

Add an ellipse or an elliptic arc to a metrology model.


add_metrology_object_ellipse_measure adds a metrology object of type ellipse or elliptic arc to
a metrology model and prepares the rectangular measure regions. The handle of the model is passed in
MetrologyHandle.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The geometric shape of the metrology object of type ellipse is specified by its center (Row, Column), the orien-
tation of the main axis Phi, the length of the larger half axis Radius1, and the length of the smaller half axis
Radius2. The input value for Phi is mapped automatically to the interval ] − π, π]. The rectangular measure
regions lie perpendicular to the boundary of the ellipse. The half edge lengths of the measure regions perpen-
dicular and tangential to the boundary of the ellipse are set in MeasureLength1 and MeasureLength2.
The centers of the measure regions lie on the boundary of the geometric shape. The parameter MeasureSigma
specifies the standard deviation that is used by the operator apply_metrology_model to smooth the gray
values of the image. Salient edges can be selected with the parameter MeasureThreshold, which con-
stitutes a threshold on the amplitude, i.e., the absolute value of the first derivative of the edge. The operator
add_metrology_object_ellipse_measure returns the index of the added metrology object in the pa-
rameter Index.
Furthermore, you can adjust some generic parameters with GenParamName and GenParamValue. The
following values for GenParamName and GenParamValue are available:

’start_phi’: The parameter specifies the angle at the start point of an elliptic arc. The angle at the start point is
measured relative to the positive main axis specified with Phi and corresponds to the smallest surrounding
circle of the ellipse. The actual start point of the ellipse is the intersection of the ellipse with the orthogonal
projection of the corresponding circle point onto the main axis. The angle refers to the coordinate system of
the ellipse, i.e., it is specified relative to the main axis and in a mathematical positive direction. Thus, the two
main poles correspond to the angles 0 and π, the two minor poles to the angle π/2 and 3π/2. To create a
closed ellipse the value of the parameter ’start_phi’ is set to 0 and the value of the parameter ’end_phi’ is set
to 2π (with positive point order). The input value is mapped automatically to the interval [0, 2π].
Suggested values: 0.0, 0.78, 6.28318
Default: 0.0
’end_phi’: The parameter specifies the angle at the end point of an elliptic arc. The angle at the end point is
measured relative to the positive main axis specified with Phi and corresponds to the smallest surrounding


circle of the ellipse. The actual end point of the ellipse is the intersection of the ellipse with the orthogonal
projection of the corresponding circle point onto the main axis. The angle refers to the coordinate system of
the ellipse, i.e., it is specified relative to the main axis and in a mathematical positive direction. Thus, the two
main poles correspond to the angles 0 and π, the two minor poles to the angle π/2 and 3π/2. To create a
closed ellipse the value of the parameter ’start_phi’ is set to 0 and the value of the parameter ’end_phi’ is set
to 2π (with positive point order). The input value is mapped automatically to the interval [0, 2π].
Suggested values: 0.0, 0.78, 6.28318
Default: 6.28318
’point_order’: The parameter specifies the direction of the elliptic arc. For the value ’positive’, the elliptic arc
is defined between ’start_phi’ and ’end_phi’ in mathematically positive direction (counterclockwise). For
the value ’negative’, the elliptic arc is defined between ’start_phi’ and ’end_phi’ in mathematically negative
direction (clockwise).
List of values: ’positive’, ’negative’
Default: ’positive’

Additionally, all generic parameters that are available for the operator set_metrology_object_param can
be set. But note that for a lot of applications the default values are sufficient and no adjustment is necessary.
Parameters

. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.center.y(-array) ; real / integer
Row (or Y) coordinate of the center of the ellipse.
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.center.x(-array) ; real / integer
Column (or X) coordinate of the center of the ellipse.
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.angle.rad(-array) ; real / integer
Orientation of the main axis [rad].
. Radius1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.radius1(-array) ; real / integer
Length of the larger half axis.
. Radius2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ellipse.radius2(-array) ; real / integer
Length of the smaller half axis.
. MeasureLength1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Half length of the measure regions perpendicular to the boundary.
Default: 20.0
Suggested values: MeasureLength1 ∈ {10.0, 20.0, 30.0}
Value range: 1.0 ≤ MeasureLength1
Minimum increment: 1.0
Recommended increment: 10.0
Restriction: MeasureLength1 < Radius1 && MeasureLength1 < Radius2
. MeasureLength2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Half length of the measure regions tangential to the boundary.
Default: 5.0
Suggested values: MeasureLength2 ∈ {3.0, 5.0, 10.0}
Value range: 1.0 ≤ MeasureLength2
Minimum increment: 1.0
Recommended increment: 10.0
. MeasureSigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Sigma of the Gaussian function for the smoothing.
Default: 1.0
Suggested values: MeasureSigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Value range: 0.4 ≤ MeasureSigma ≤ 100.0
Minimum increment: 0.01
Recommended increment: 0.1

. MeasureThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Minimum edge amplitude.
Default: 30.0
Suggested values: MeasureThreshold ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Value range: 1 ≤ MeasureThreshold ≤ 255 (lin)
Minimum increment: 0.5
Recommended increment: 2
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’distance_threshold’, ’end_phi’, ’instances_outside_measure_regions’,
’max_num_iterations’, ’measure_distance’, ’measure_interpolation’, ’measure_select’, ’measure_transition’,
’min_score’, ’num_instances’, ’num_measures’, ’point_order’, ’rand_seed’, ’start_phi’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; real / integer / string
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {1, 2, 3, 4, 5, 10, 20, ’all’, ’true’, ’false’, ’first’, ’last’, ’positive’,
’negative’, ’uniform’, ’nearest_neighbor’, ’bilinear’, ’bicubic’}
. Index (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; integer
Index of the created metrology object.
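Example
The following is a minimal usage sketch; the approximate ellipse parameters are assumptions chosen for illustration, and Image is assumed to contain an approximately elliptic object near the given pose.

* Measure an ellipse with approximately known pose and half axes
* (approximate values are illustrative assumptions).
create_metrology_model (MetrologyHandle)
get_image_size (Image, Width, Height)
set_metrology_model_image_size (MetrologyHandle, Width, Height)
add_metrology_object_ellipse_measure (MetrologyHandle, 200, 300, 0.0, 120, \
                                      80, 20, 5, 1, 30, [], [], Index)
apply_metrology_model (Image, MetrologyHandle)
get_metrology_object_result (MetrologyHandle, Index, 'all', 'result_type', \
                             'all_param', EllipseParam)
get_metrology_object_result_contour (Contour, MetrologyHandle, Index, \
                                     'all', 1.5)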
Result
If the parameters are valid, the operator add_metrology_object_ellipse_measure returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• MetrologyHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
set_metrology_model_image_size
Possible Successors
align_metrology_model, apply_metrology_model
Alternatives
add_metrology_object_generic
See also
get_metrology_object_model_contour, set_metrology_model_param,
add_metrology_object_circle_measure, add_metrology_object_line_measure,
add_metrology_object_rectangle2_measure
Module
2D Metrology

add_metrology_object_generic ( : : MetrologyHandle, Shape, ShapeParam, MeasureLength1, MeasureLength2,
MeasureSigma, MeasureThreshold, GenParamName, GenParamValue : Index )

Add a metrology object to a metrology model.


add_metrology_object_generic adds a metrology object of type Shape to a metrology model and pre-
pares the rectangular measure regions.


For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The handle of the model is passed in MetrologyHandle.
Shape specifies which type of object is added to the metrology model. The operator
add_metrology_object_generic returns the index of the added metrology object in the parame-
ter Index. Note that add_metrology_object_generic provides the functionality of the operators
add_metrology_object_circle_measure, add_metrology_object_ellipse_measure,
add_metrology_object_rectangle2_measure and add_metrology_object_line_measure
in one operator.
Possible shapes
Depending on the object specified in Shape the following values are expected:
’circle’: The geometric shape of the metrology object of type circle is specified by its center (Row, Column) and
radius.
ShapeParam=[Row, Column, Radius]
’rectangle2’: The geometric shape of the metrology object of type rectangle is specified by its center (Row, Col-
umn), the orientation of the main axis Phi, and the half edge lengths Length1 and Length2. The input value
for Phi is mapped automatically to the interval ] − π, π].
ShapeParam=[Row, Column, Phi, Length1, Length2]
’ellipse’: The geometric shape of the metrology object of type ellipse is specified by its center (Row, Column),
the orientation of the main axis Phi, the length of the larger half axis Radius1, and the length of the smaller
half axis Radius2. The input value for Phi is mapped automatically to the interval ] − π, π].
ShapeParam=[Row, Column, Phi, Radius1, Radius2]
’line’: The geometric shape of the metrology object of type line is described by the coordinates of its start point
(RowBegin, ColumnBegin) and the coordinates of its end point (RowEnd, ColumnEnd).
ShapeParam=[RowBegin, ColumnBegin, RowEnd, ColumnEnd]
Definition of measure regions
add_metrology_object_generic also prepares the rectangular measure regions. The rectangular measure
regions lie perpendicular to the boundary of the object. The half edge lengths of the measure regions perpendicular
and tangential to the boundary of the object are set in MeasureLength1 and MeasureLength2. The centers
of the measure regions lie on the boundary of the object. The parameter MeasureSigma specifies a standard
deviation that is used by the operator apply_metrology_model to smooth the gray values of the image.
Salient edges can be selected with the parameter MeasureThreshold, which constitutes a threshold on the
amplitude, i.e., the absolute value of the first derivative of the edge.
Generic parameters
Generic parameters and their values can be specified using GenParamName and GenParamValue. All
generic parameters that are available in the operator set_metrology_object_param can also be set in
add_metrology_object_generic. But note that for a lot of applications the default values are sufficient
and no adjustment is necessary. Furthermore, the following values for GenParamName and GenParamValue
are available only for Shape = ’circle’ and ’ellipse’:
’start_phi’: The parameter specifies the angle at the start point of a circular or elliptic arc. For an ellipse, the angle
at the start point is measured relative to the positive main axis and corresponds to the smallest surrounding
circle of the ellipse. The actual start point of the ellipse is the intersection of the ellipse with the orthogonal
projection of the corresponding circle point onto the main axis. To create a closed circle or ellipse the value
of the parameter ’start_phi’ is set to 0 and the value of the parameter ’end_phi’ is set to 2π (with positive
point order). The input value is mapped automatically to the interval [0, 2π].
Suggested values: 0.0, 0.78, 6.28318
Default: 0.0
’end_phi’: The parameter specifies the angle at the end point of a circular or elliptic arc. For an ellipse, the angle
at the end point is measured relative to the positive main axis and corresponds to the smallest surrounding
circle of the ellipse. The actual end point of the ellipse is the intersection of the ellipse with the orthogonal
projection of the corresponding circle point onto the main axis. To create a closed circle or ellipse the value
of the parameter ’start_phi’ is set to 0 and the value of the parameter ’end_phi’ is set to 2π (with positive
point order). The input value is automatically mapped internally to the interval [0, 2π].
Suggested values: 0.0, 0.78, 6.28318
Default: 6.28318

HALCON 24.11.1.0
40 CHAPTER 2 2D METROLOGY

’point_order’: The parameter specifies the direction of the circular or elliptic arc. For the value ’positive’, the arc
is defined between ’start_phi’ and ’end_phi’ in mathematically positive direction (counterclockwise). For the
value ’negative’, the arc is defined between ’start_phi’ and ’end_phi’ in mathematically negative direction
(clockwise).
List of values: ’positive’, ’negative’
Default: ’positive’
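For illustration, adding a circular arc via these generic parameters might look like the following minimal sketch (not part of the original reference; the image size and circle parameters are placeholders):

create_metrology_model (MetrologyHandle)
set_metrology_model_image_size (MetrologyHandle, 640, 480)
* Add a counterclockwise circular arc with center (200,300) and radius 50
* that runs from 'start_phi' = 0 to 'end_phi' = 270 degrees
add_metrology_object_generic (MetrologyHandle, 'circle', [200,300,50], \
                              20, 5, 1, 30, \
                              ['start_phi','end_phi','point_order'], \
                              [0.0,rad(270),'positive'], Index)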

Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Shape (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; string
Type of the metrology object to be added.
Default: ’circle’
List of values: Shape ∈ {’circle’, ’ellipse’, ’rectangle2’, ’line’}
. ShapeParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value-array ; real / integer
Parameters of the metrology object to be added.
. MeasureLength1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Half length of the measure regions perpendicular to the boundary.
Default: 20.0
Suggested values: MeasureLength1 ∈ {10.0, 20.0, 30.0}
Value range: 1.0 ≤ MeasureLength1 ≤ 511.0 (lin)
Minimum increment: 1.0
Recommended increment: 10.0
. MeasureLength2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Half length of the measure regions tangential to the boundary.
Default: 5.0
Suggested values: MeasureLength2 ∈ {3.0, 5.0, 10.0}
Value range: 1.0 ≤ MeasureLength2 ≤ 511.0 (lin)
Minimum increment: 1.0
Recommended increment: 10.0
. MeasureSigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Sigma of the Gaussian function for the smoothing.
Default: 1.0
Suggested values: MeasureSigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Value range: 0.4 ≤ MeasureSigma ≤ 100 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
. MeasureThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Minimum edge amplitude.
Default: 30.0
Suggested values: MeasureThreshold ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Value range: 1 ≤ MeasureThreshold ≤ 255 (lin)
Minimum increment: 0.5
Recommended increment: 2
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’distance_threshold’, ’end_phi’, ’instances_outside_measure_regions’,
’max_num_iterations’, ’measure_distance’, ’measure_interpolation’, ’measure_select’, ’measure_transition’,
’min_score’, ’num_instances’, ’num_measures’, ’point_order’, ’rand_seed’, ’start_phi’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; real / integer / string
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {1, 2, 3, 4, 5, 10, 20, ’all’, ’true’, ’false’, ’first’, ’last’, ’positive’,
’negative’, ’uniform’, ’nearest_neighbor’, ’bilinear’, ’bicubic’}
. Index (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; integer
Index of the created metrology object.


Example

create_metrology_model (MetrologyHandle)
read_image (Image, 'fabrik')
get_image_size (Image, Width, Height)
set_metrology_model_image_size (MetrologyHandle, Width, Height)
RectPar1 := [270,232,rad(0),30,25]
RectPar2 := [360,230,rad(0),30,25]
LinePar := [45,360,415,360]
RectPar3 := [245,320,rad(-90),70,35]
* Add two rectangles
add_metrology_object_generic (MetrologyHandle, 'rectangle2', \
[RectPar1,RectPar2], 20, 5, 1, 30, [], [], \
Indices)
* Add a rectangle and a line
add_metrology_object_generic (MetrologyHandle, ['rectangle2','line'], \
[RectPar3,LinePar], 20, 5, 1, 30, [], [], \
Index)
get_metrology_object_model_contour (Contour, MetrologyHandle, 'all', 1.5)
apply_metrology_model (Image, MetrologyHandle)
get_metrology_object_result_contour (Contour1, MetrologyHandle, 'all', \
'all', 1.5)

Result
If the parameters are valid, the operator add_metrology_object_generic returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• MetrologyHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
set_metrology_model_image_size, set_metrology_model_param
Possible Successors
align_metrology_model, apply_metrology_model, set_metrology_model_param
See also
get_metrology_object_model_contour
Module
2D Metrology

add_metrology_object_line_measure ( : : MetrologyHandle,
RowBegin, ColumnBegin, RowEnd, ColumnEnd, MeasureLength1,
MeasureLength2, MeasureSigma, MeasureThreshold, GenParamName,
GenParamValue : Index )

Add a line to a metrology model.


add_metrology_object_line_measure adds a metrology object of type line to a metrology model and
prepares the rectangular measure regions. The handle of the model is passed in MetrologyHandle.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The geometric shape of the metrology object of type line is described by the coordinates of the start point
(RowBegin, ColumnBegin) and the coordinates of the end point (RowEnd, ColumnEnd). The rectangular
measure regions lie perpendicular to the line. The half edge lengths of the measure regions perpendicular and
tangential to the line are set in MeasureLength1 and MeasureLength2. The centers of the measure regions
lie on the line. The parameter MeasureSigma specifies a standard deviation that is used by the operator
apply_metrology_model to smooth the gray values of the image. Salient edges can be selected with the
parameter MeasureThreshold, which constitutes a threshold on the amplitude, i.e., the absolute value of the
first derivative of the edge.
Furthermore, you can adjust some generic parameters within GenParamName and GenParamValue. In
particular, all generic parameters that are available in the operator set_metrology_object_param can be set.
But note that for a lot of applications the default values are sufficient and no adjustment is necessary.
The operator add_metrology_object_line_measure returns the index of the added metrology object in
the parameter Index.
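For illustration, a minimal call sequence might look like the following sketch (not part of the original reference; the image name 'fabrik' and the line coordinates are taken over from the example of add_metrology_object_generic):

create_metrology_model (MetrologyHandle)
read_image (Image, 'fabrik')
get_image_size (Image, Width, Height)
set_metrology_model_image_size (MetrologyHandle, Width, Height)
* Add a line from (45,360) to (415,360) with default generic parameters
add_metrology_object_line_measure (MetrologyHandle, 45, 360, 415, 360, \
                                   20, 5, 1, 30, [], [], Index)
apply_metrology_model (Image, MetrologyHandle)
get_metrology_object_result (MetrologyHandle, Index, 'all', 'result_type', \
                             'all_param', LineParam)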
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. RowBegin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.y(-array) ; real / integer
Row (or Y) coordinate of the start of the line.
. ColumnBegin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.begin.x(-array) ; real / integer
Column (or X) coordinate of the start of the line.
. RowEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.y(-array) ; real / integer
Row (or Y) coordinate of the end of the line.
. ColumnEnd (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . line.end.x(-array) ; real / integer
Column (or X) coordinate of the end of the line.
. MeasureLength1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Half length of the measure regions perpendicular to the boundary.
Default: 20.0
Suggested values: MeasureLength1 ∈ {10.0, 20.0, 30.0}
Value range: 1.0 ≤ MeasureLength1
Minimum increment: 1.0
Recommended increment: 10.0
. MeasureLength2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Half length of the measure regions tangential to the boundary.
Default: 5.0
Suggested values: MeasureLength2 ∈ {3.0, 5.0, 10.0}
Value range: 1.0 ≤ MeasureLength2
Minimum increment: 1.0
Recommended increment: 10.0
. MeasureSigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Sigma of the Gaussian function for the smoothing.
Default: 1.0
Suggested values: MeasureSigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Value range: 0.4 ≤ MeasureSigma ≤ 100.0
Minimum increment: 0.01
Recommended increment: 0.1
. MeasureThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Minimum edge amplitude.
Default: 30.0
Suggested values: MeasureThreshold ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Value range: 1 ≤ MeasureThreshold ≤ 255 (lin)
Minimum increment: 0.5
Recommended increment: 2


. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’distance_threshold’, ’instances_outside_measure_regions’,
’max_num_iterations’, ’measure_distance’, ’measure_interpolation’, ’measure_select’, ’measure_transition’,
’min_score’, ’num_instances’, ’num_measures’, ’rand_seed’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; real / integer / string
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {1, 2, 3, 4, 5, 10, 20, ’all’, ’true’, ’false’, ’first’, ’last’, ’positive’,
’negative’, ’uniform’, ’nearest_neighbor’, ’bilinear’, ’bicubic’}
. Index (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; integer
Index of the created metrology object.
Result
If the parameters are valid, the operator add_metrology_object_line_measure returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• MetrologyHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
set_metrology_model_image_size
Possible Successors
align_metrology_model, apply_metrology_model
Alternatives
add_metrology_object_generic
See also
get_metrology_object_model_contour, set_metrology_model_param,
add_metrology_object_circle_measure, add_metrology_object_ellipse_measure,
add_metrology_object_rectangle2_measure
Module
2D Metrology

add_metrology_object_rectangle2_measure ( : : MetrologyHandle,
Row, Column, Phi, Length1, Length2, MeasureLength1,
MeasureLength2, MeasureSigma, MeasureThreshold, GenParamName,
GenParamValue : Index )

Add a rectangle to a metrology model.

add_metrology_object_rectangle2_measure adds a metrology object of type rectangle to a
metrology model and prepares the rectangular measure regions. The handle of the model is passed in
MetrologyHandle.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The geometric shape of the metrology object of type rectangle is specified by its center (Row, Column), the
orientation of the main axis Phi, and the half edge lengths Length1 and Length2. The input value for Phi is
mapped automatically to the interval ] − π, π]. The rectangular measure regions lie perpendicular to the boundary
of the rectangle. The half edge lengths of the measure regions perpendicular and tangential to the boundary of
the rectangle are set in MeasureLength1 and MeasureLength2. The centers of the measure regions lie on
the boundary of the rectangle. The parameter MeasureSigma specifies a standard deviation that is used by the
operator apply_metrology_model to smooth the gray values of the image. Salient edges can be selected
with the parameter MeasureThreshold, which constitutes a threshold on the amplitude, i.e., the absolute value
of the first derivative of the edge.
Furthermore, you can adjust some generic parameters within GenParamName and GenParamValue. In
particular, all generic parameters that are available in the operator set_metrology_object_param can be set.
But note that for a lot of applications the default values are sufficient and no adjustment is necessary.
The operator add_metrology_object_rectangle2_measure returns the index of the added metrology
object within the metrology model in the parameter Index.
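For illustration, a minimal call sequence might look like the following sketch (not part of the original reference; the rectangle parameters are taken over from the example of create_metrology_model):

create_metrology_model (MetrologyHandle)
read_image (Image, 'fabrik')
get_image_size (Image, Width, Height)
set_metrology_model_image_size (MetrologyHandle, Width, Height)
* Add a rectangle centered at (270,230) with half edge lengths 30 and 25
add_metrology_object_rectangle2_measure (MetrologyHandle, 270, 230, 0, 30, \
                                         25, 10, 2, 1, 30, [], [], Index)
apply_metrology_model (Image, MetrologyHandle)
get_metrology_object_result_contour (Contour, MetrologyHandle, Index, \
                                     'all', 1.5)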
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.y(-array) ; real / integer
Row (or Y) coordinate of the center of the rectangle.
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.x(-array) ; real / integer
Column (or X) coordinate of the center of the rectangle.
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.angle.rad(-array) ; real / integer
Orientation of the main axis [rad].
. Length1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hwidth(-array) ; real / integer
Length of the larger half edge of the rectangle.
. Length2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hheight(-array) ; real / integer
Length of the smaller half edge of the rectangle.
. MeasureLength1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Half length of the measure regions perpendicular to the boundary.
Default: 20.0
Suggested values: MeasureLength1 ∈ {10.0, 20.0, 30.0}
Value range: 1.0 ≤ MeasureLength1
Minimum increment: 1.0
Recommended increment: 10.0
Restriction: MeasureLength1 < Length1 && MeasureLength1 < Length2
. MeasureLength2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Half length of the measure regions tangential to the boundary.
Default: 5.0
Suggested values: MeasureLength2 ∈ {3.0, 5.0, 10.0}
Value range: 1.0 ≤ MeasureLength2
Minimum increment: 1.0
Recommended increment: 10.0
. MeasureSigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Sigma of the Gaussian function for the smoothing.
Default: 1.0
Suggested values: MeasureSigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Value range: 0.4 ≤ MeasureSigma ≤ 100.0
Minimum increment: 0.01
Recommended increment: 0.1
. MeasureThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Minimum edge amplitude.
Default: 30.0
Suggested values: MeasureThreshold ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Value range: 1 ≤ MeasureThreshold ≤ 255 (lin)
Minimum increment: 0.5
Recommended increment: 2
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’distance_threshold’, ’instances_outside_measure_regions’,
’max_num_iterations’, ’measure_distance’, ’measure_interpolation’, ’measure_select’, ’measure_transition’,
’min_score’, ’num_instances’, ’num_measures’, ’rand_seed’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; real / integer / string
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {1, 2, 3, 4, 5, 10, 20, ’all’, ’true’, ’false’, ’first’, ’last’, ’positive’,
’negative’, ’uniform’, ’nearest_neighbor’, ’bilinear’, ’bicubic’}
. Index (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; integer
Index of the created metrology object.
Result
If the parameters are valid, the operator add_metrology_object_rectangle2_measure returns the
value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• MetrologyHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
set_metrology_model_image_size
Possible Successors
align_metrology_model, apply_metrology_model
Alternatives
add_metrology_object_generic
See also
get_metrology_object_model_contour, set_metrology_model_param,
add_metrology_object_circle_measure, add_metrology_object_ellipse_measure,
add_metrology_object_line_measure
Module
2D Metrology

align_metrology_model ( : : MetrologyHandle, Row, Column,
Angle : )

Alignment of a metrology model.

align_metrology_model moves and rotates the whole metrology model MetrologyHandle relative to
the image coordinate system which has its origin in the top left corner.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
An alignment ensures that the position and orientation of the metrology model is adapted to the objects to be
measured in the current image. The alignment is then used by apply_metrology_model to perform the
measurement. First the metrology model is rotated by Angle, then the metrology model is translated by Row and
Column. The values of the alignment are overwritten by the next call of align_metrology_model.
Computation of the parameters of the alignment
The parameters of the alignment can be determined using diverse methods. Here, three possibilities to determine
the parameters are listed:


Using region analysis:
If the metrology model can be extracted using region processing and if the pose of the Region changes
only slightly in subsequent images, the parameters of the reference system of the metrology model
and of the alignment can be derived using region analysis. In the following picture, threshold and
smallest_rectangle2 were used to obtain these parameters.
The region extracted with threshold is shown in red. The rectangle computed with
smallest_rectangle2 is shown in green.
1. Setting the reference system
In the image in which the metrology model was defined, extract a region containing the metrology
objects. The pose of this region with respect to the image coordinate system is determined and set as the
reference system of the metrology model using set_metrology_model_param. This step is only
performed once when setting up the metrology model.
Example:
threshold(Image, Region, 0, 50)
smallest_rectangle2(Region, RowOrig, ColumnOrig, AngleOrig, Length1, \
                    Length2)
set_metrology_model_param(MetrologyHandle, 'reference_system', \
                          [RowOrig, ColumnOrig, AngleOrig])
2. Determining the alignment
In an image where the metrology model occurs in a different pose, the current pose of the extracted
region is determined. This pose is then used to align the metrology model.
Example:
threshold(CurrentImage, Region, 0, 50)
smallest_rectangle2(Region, RowAlign, ColumnAlign, AngleAlign, \
                    Length1, Length2)
align_metrology_model(MetrologyHandle, RowAlign, ColumnAlign, \
                      AngleAlign)
Using a shape model:
If a shape model is used to align the metrology model, the reference system with respect to which the
metrology objects are given has to be set so that it coincides with the coordinate system used by the shape model.
Only then, the results (’row’, ’column’, ’angle’) of get_generic_shape_model_result can be used
directly in align_metrology_model to align the metrology model in the current image. The individual
steps that are needed are shown below.

(1) The contours of the metrology model. (2) The contours of the shape model. (3) The contours of the
metrology model after setting the correct reference system.
1. Setting the reference system
In the image in which the metrology model was defined, the pose of the origin of the
shape model is determined and set as the reference system of the metrology model using
set_metrology_model_param. This step is only performed once when setting up the metrology
model.
Example:
train_generic_shape_model(Image, ModelID)
area_center(Image, Area, RowOrig, ColumnOrig)
set_metrology_model_param(MetrologyHandle, 'reference_system', \
                          [RowOrig,ColumnOrig,0])


2. Determining the alignment
In an image in which the object to be measured occurs in a different pose, the current pose of the shape
model is determined and set in the metrology model using align_metrology_model.
Example:
find_generic_shape_model(CurrentImage, ModelID, MatchResultID, \
                         NumMatchResult)
get_generic_shape_model_result(MatchResultID, 'all', 'row', RowAlign)
get_generic_shape_model_result(MatchResultID, 'all', 'column', \
                               ColumnAlign)
get_generic_shape_model_result(MatchResultID, 'all', 'angle', \
                               AngleAlign)
align_metrology_model(MetrologyHandle, RowAlign, ColumnAlign, \
                      AngleAlign)
Using a rigid 2D transformation:
If certain model points (given as [PRowModel], [PColumnModel]) can be clearly identified and if they can
still be clearly identified in further images in which the objects to be measured can occur shifted or rotated, a
rigid transformation can be calculated between those points. The transformation parameters can then directly
be used for aligning the model. In this case, the reference point of the metrology model does not have to be
changed.

(1) The contours of the metrology object and the four corresponding points in the image that was used for the
creation of the metrology model. (2) The contours of the metrology object and the four corresponding points
in a new image.
1. Determine the point correspondences
2. Estimate the model pose
The following operator sequence calculates the parameters of the model pose (Row, Column, Angle)
from corresponding points in the model image and one other image.
Example:
vector_to_rigid(PRowModel, PColumnModel, PRowCurrent, PColumnCurrent, \
                HomMat2D)
hom_mat2d_to_affine_par(HomMat2D, Sx, Sy, Angle, Theta, Row, Column)
align_metrology_model(MetrologyHandle, Row, Column, Angle)

Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real / integer
Row coordinate of the alignment.
Default: 0
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real / integer
Column coordinate of the alignment.
Default: 0
. Angle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real / integer
Rotation angle of the alignment.
Default: 0
Example

read_image (Image, 'metal-parts/circle_plate_01')


create_metrology_model (MetrologyHandle)
get_image_size (Image, Width, Height)
set_metrology_model_image_size (MetrologyHandle, Width, Height)
CircleParam := [354,274,53]
CircleParam := [CircleParam,350,519,53]
add_metrology_object_generic (MetrologyHandle, 'circle', CircleParam, 20,\
5, 1, 30, [], [], CircleIndices)
create_generic_shape_model (ModelID)
set_generic_shape_model_param (ModelID, 'metric', 'use_polarity')
set_generic_shape_model_param (ModelID, 'min_contrast', 20)
train_generic_shape_model (Image, ModelID)
* Determine location of shape model origin
area_center (Image, Area, RowOrigin, ColOrigin)
set_metrology_model_param (MetrologyHandle, 'reference_system', \
[RowOrigin,ColOrigin,0])
read_image (CurrentImage, 'metal-parts/circle_plate_02')
find_generic_shape_model (CurrentImage, ModelID, MatchResultID, \
NumMatchResult)
get_generic_shape_model_result (MatchResultID, 'all', 'row', Row)
get_generic_shape_model_result (MatchResultID, 'all', 'column', Col)
get_generic_shape_model_result (MatchResultID, 'all', 'angle', Angle)
align_metrology_model (MetrologyHandle, Row, Col, Angle)
apply_metrology_model (CurrentImage, MetrologyHandle)
get_metrology_object_result (MetrologyHandle, CircleIndices, 'all', \
'result_type', 'all_param', Rectangle)
get_metrology_object_result_contour (Contour, MetrologyHandle, \
CircleIndices, 'all', 1.5)

Result
If the parameters are valid, the operator align_metrology_model returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• MetrologyHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
set_metrology_model_param, add_metrology_object_generic
Possible Successors
apply_metrology_model
See also
get_metrology_object_model_contour
Module
2D Metrology

apply_metrology_model ( Image : : MetrologyHandle : )

Measure and fit the geometric shapes of all metrology objects of a metrology model.


apply_metrology_model locates the edges inside the measure regions of the metrology objects of the
metrology model MetrologyHandle within Image and fits the corresponding geometric shapes to the resulting
edge positions.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The measurements are performed as follows:
Determining the edge positions
Within the measure regions of the metrology objects, the positions of the edges are determined. The edge location
is calculated internally with the operator measure_pos or fuzzy_measure_pos. The latter is used if at least
one fuzzy function was set for the metrology objects with set_metrology_object_fuzzy_param.
Fitting geometric shapes to the edge positions
The geometric shapes of the metrology objects are adapted to fit optimally to the resulting edge positions. In
particular, a RANSAC algorithm is used to select a set of initial edge positions that is necessary to create an
instance of the specific geometric shape, e.g., three edge positions are selected for a metrology object of type
circle. Then, those edge positions that are near the corresponding instance of the geometric shape are
determined and, if the number of suitable edge positions is sufficient (see the generic parameter ’min_score’ of
set_metrology_object_param), are selected for the final fitting of the geometric shape. If the number
of suitable edge positions is not sufficient, another set of initial edge positions is tested until a suitable selection
of edge positions is found. The geometric shape is then fitted to the edge positions that are selected for the final
fitting, and its parameters are stored in the metrology model. Note that more than one instance of each metrology
object is returned if the generic parameter ’num_instances’ is set to a value larger than 1. This and other
parameters can be set when adding the metrology objects to the metrology model or separately with the operator
set_metrology_object_param. Note that for each instance of the metrology object different initial edge
positions are used, i.e., a second instance is based on edge positions that were not already used for the fitting of the
first instance. The algorithm stops either when ’num_instances’ instances were found or if the remaining number
of suitable initial edge positions is too low for a further fitting of the geometric shape.
Accessing the results
The results of the measurements can be accessed from the metrology model using
get_metrology_object_result. Note that if more than one instance of an object is returned,
the order of the returned instances is arbitrary and therefore not a measure of the quality of the fitting.
Note further that if the parameters ’camera_param’ and ’plane_pose’ were set for the metrology
model using set_metrology_model_param, world coordinates are used for the fitting. Otherwise,
image coordinates are used. The XLD contours for the measured objects can be obtained using
get_metrology_object_result_contour.
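For illustration, a typical call sequence might look like the following sketch (not part of the original reference; the image and circle parameters are taken over from the example of align_metrology_model, and the usual signature of set_metrology_object_param with handle, index, parameter name, and value is assumed):

read_image (Image, 'metal-parts/circle_plate_01')
create_metrology_model (MetrologyHandle)
get_image_size (Image, Width, Height)
set_metrology_model_image_size (MetrologyHandle, Width, Height)
add_metrology_object_generic (MetrologyHandle, 'circle', [354,274,53], \
                              20, 5, 1, 30, [], [], Index)
* Allow up to two instances of the circle to be fitted
set_metrology_object_param (MetrologyHandle, Index, 'num_instances', 2)
apply_metrology_model (Image, MetrologyHandle)
get_metrology_object_result (MetrologyHandle, Index, 'all', 'result_type', \
                             'all_param', CircleParam)
get_metrology_object_result_contour (Contour, MetrologyHandle, Index, \
                                     'all', 1.5)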
Attention
Note that all measure regions of all metrology objects must be recomputed if the width or the height
of the input Image is not equal to the width and height stored in the metrology object (e.g., set with
set_metrology_model_image_size). This leads to longer execution times of the operator.
Note further that apply_metrology_model ignores the domain of Image for efficiency reasons (see also
measure_pos).
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2 / real
Input image.
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
Result
If the parameters are valid, the operator apply_metrology_model returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• MetrologyHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
add_metrology_object_generic, add_metrology_object_circle_measure,
add_metrology_object_ellipse_measure, add_metrology_object_line_measure,
add_metrology_object_rectangle2_measure, align_metrology_model,
set_metrology_model_param, set_metrology_object_param
Possible Successors
get_metrology_object_result, get_metrology_object_result_contour,
get_metrology_object_measures
See also
set_metrology_object_fuzzy_param, read_metrology_model, write_metrology_model
Module
2D Metrology

clear_metrology_model ( : : MetrologyHandle : )

Delete a metrology model and free the allocated memory.

clear_metrology_model deletes a metrology model that was created by create_metrology_model,
copy_metrology_model, read_metrology_model, or deserialize_metrology_model. Note
that deleting the model also deletes the metrology objects it contains. All memory used by the metrology
model and the metrology objects is freed. The handle of the model is passed in MetrologyHandle. After the
operator call, the metrology model is invalid.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
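For illustration, the typical life cycle of a metrology model might look like the following sketch (not part of the original reference):

create_metrology_model (MetrologyHandle)
* ... add metrology objects, apply the model, and read the results ...
* Free the model together with all contained metrology objects
clear_metrology_model (MetrologyHandle)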
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
Result
The operator clear_metrology_model returns the value 2 (H_MSG_TRUE) if a valid handle was passed and
the referred metrology model can be freed correctly. Otherwise, an exception will be raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• MetrologyHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
get_metrology_object_result, write_metrology_model
Module
2D Metrology

clear_metrology_object ( : : MetrologyHandle, Index : )

Delete metrology objects and free the allocated memory.


clear_metrology_object deletes metrology objects of a metrology model that were created, e.g., by
add_metrology_object_circle_measure, add_metrology_object_ellipse_measure,
add_metrology_object_line_measure, or add_metrology_object_rectangle2_measure.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
All memory used by the metrology objects is freed. The handle of the metrology model is passed in
MetrologyHandle. The index of the metrology objects is passed in Index. If Index is set to ’all’, all
metrology objects are deleted. After the operator call the metrology objects are invalid.
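For illustration, a minimal sketch (not part of the original reference; the indices are placeholders) might look as follows:

* Delete only the metrology objects with indices 0 and 2
clear_metrology_object (MetrologyHandle, [0,2])
* Alternatively, delete all metrology objects at once
clear_metrology_object (MetrologyHandle, 'all')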
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; string / integer
Index of the metrology objects.
Default: ’all’
Suggested values: Index ∈ {’all’, 0, 1, 2}
Result
If the parameters are valid, the operator clear_metrology_object returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• MetrologyHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Module
2D Metrology

copy_metrology_model ( : : MetrologyHandle,
Index : CopiedMetrologyHandle )

Copy a metrology model.

copy_metrology_model creates a new metrology model and copies the selected metrology objects of the
input metrology model to this new output metrology model.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The input metrology model is defined by a handle MetrologyHandle. The parameter Index determines
the metrology objects that are copied. With Index set to ’all’, all metrology objects are copied. The
operator returns the handle CopiedMetrologyHandle of the new metrology model. It can be used to
save memory space. Access to the parameters of the metrology objects is possible, e.g., with the operator
get_metrology_object_param.
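For illustration, a minimal sketch (not part of the original reference; the indices are placeholders) might look as follows:

* Copy only the metrology objects with indices 0 and 1 into a new model
copy_metrology_model (MetrologyHandle, [0,1], CopiedMetrologyHandle)
* Inspect the objects contained in the copy
get_metrology_object_indices (CopiedMetrologyHandle, Indices)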
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; string / integer
Index of the metrology objects.
Default: ’all’
Suggested values: Index ∈ {’all’, 0, 1, 2}
. CopiedMetrologyHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Handle of the copied metrology model.


Result
If the parameters are valid, the operator copy_metrology_model returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Module
2D Metrology

create_metrology_model ( : : : MetrologyHandle )

Create the data structure that is needed to measure geometric shapes.

create_metrology_model creates a metrology model, i.e., the data structure that is needed to measure
objects with a specific geometric shape (metrology object) via 2D metrology, and returns it in the handle
MetrologyHandle.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
Attention
Note that after calling the operator create_metrology_model the operator
set_metrology_model_image_size should be called for efficiency reasons.
Parameters
. MetrologyHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
Example

read_image (Image, 'fabrik')
create_metrology_model (MetrologyHandle)
get_image_size (Image, Width, Height)
set_metrology_model_image_size (MetrologyHandle, Width, Height)
add_metrology_object_rectangle2_measure (MetrologyHandle, 270, 230, 0, 30, \
25, 10, 2, 1, 30, [], [], Index)
apply_metrology_model (Image, MetrologyHandle)
get_metrology_object_result (MetrologyHandle, Index, 'all', 'result_type', \
'all_param', Rectangle)
get_metrology_object_result_contour (Contour, MetrologyHandle, \
Index, 'all', 1.5)

Result
If the parameters are valid, the operator create_metrology_model returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
set_metrology_model_image_size


Module
2D Metrology

deserialize_metrology_model (
: : SerializedItemHandle : MetrologyHandle )

Deserialize a serialized metrology model.

deserialize_metrology_model deserializes a metrology model that was serialized by
serialize_metrology_model (see fwrite_serialized_item for an introduction to the basic
principle of serialization). The serialized metrology model is defined by the handle SerializedItemHandle.
The deserialized values are stored in an automatically created metrology model with the handle
MetrologyHandle. Access to the parameters of the metrology model is possible, e.g., with the
operators get_metrology_object_param or get_metrology_object_fuzzy_param.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
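For illustration, a minimal sketch (not part of the original reference; it assumes the usual signature of serialize_metrology_model, which returns a serialized item handle) might look as follows:

* Serialize an existing metrology model ...
serialize_metrology_model (MetrologyHandle, SerializedItemHandle)
* ... and restore it into a new, automatically created model
deserialize_metrology_model (SerializedItemHandle, RestoredMetrologyHandle)
get_metrology_object_indices (RestoredMetrologyHandle, Indices)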
Parameters
. SerializedItemHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
. MetrologyHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
Result
If the parameters are valid, the operator deserialize_metrology_model returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
fread_serialized_item, receive_serialized_item, serialize_metrology_model
Possible Successors
get_metrology_object_param, get_metrology_object_fuzzy_param,
apply_metrology_model
Module
2D Metrology

get_metrology_model_param ( : : MetrologyHandle,
GenParamName : GenParamValue )

Get parameters that are valid for the entire metrology model.
get_metrology_model_param queries parameters that are valid for the entire metrology model.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The metrology model is defined by the handle MetrologyHandle.
The following generic parameter names for GenParamName are possible:

’camera_param’: The internal camera parameters that are set for the metrology model.
’plane_pose’: The 3D pose of the measurement plane that is set for the metrology model. The 3D pose is given in
camera coordinates.
’reference_system’: The rotation and translation of the current reference coordinate system with respect to the
image coordinate system. The tuple returned in GenParamValue contains [row, column, angle].


’scale’: The scaling factor or unit of the results of the measurement returned by
get_metrology_object_result.
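For illustration, a minimal query sketch (not part of the original reference) might look as follows:

* Query the reference system as [row, column, angle]
get_metrology_model_param (MetrologyHandle, 'reference_system', ReferenceSystem)
* Query the scaling factor of the measurement results
get_metrology_model_param (MetrologyHandle, 'scale', Scale)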

Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Name of the generic parameter.
Default: ’camera_param’
List of values: GenParamName ∈ {’camera_param’, ’plane_pose’, ’scale’, ’reference_system’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / real / integer
Value of the generic parameter.
Result
If the parameters are valid, the operator get_metrology_model_param returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
get_metrology_object_indices, set_metrology_model_param
Possible Successors
get_metrology_object_param
See also
get_metrology_object_param, get_metrology_object_num_instances
Module
2D Metrology

get_metrology_object_fuzzy_param ( : : MetrologyHandle, Index,
GenParamName : GenParamValue )

Get a fuzzy parameter of a metrology model.

get_metrology_object_fuzzy_param allows access to the fuzzy parameters of metrology objects.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The metrology model is defined by the handle MetrologyHandle. The parameter Index specifies for which
metrology objects the information is accessed. With Index set to ’all’, the parameters of all metrology objects
are accessed. The names of the desired parameters are passed in the generic parameter GenParamName, the
corresponding values are returned in GenParamValue in the same order. All these fuzzy parameters can be set
and changed at any time with the operator set_metrology_object_fuzzy_param.
The following parameters can be accessed:
’fuzzy_thresh’: The meaning and the use of this parameter is equivalent to the parameter FuzzyThresh of the
operator fuzzy_measure_pos and is described there.
’function_contrast’: With this parameter the fuzzy function of type contrast that is set with the operator
set_metrology_object_fuzzy_param can be queried. The meaning and the use of this parameter is
equivalent to the parameter SetType with the value ’contrast’ of the operator set_fuzzy_measure and is
described there. The return value GenParamValue contains the function of the metrology object.
’function_position’: With this parameter the fuzzy function of type position that is set with the operator
set_metrology_object_fuzzy_param can be queried. Because only one fuzzy function of a type can be set,
only the last set function can be returned. The type can be ’function_position’, ’function_position_center’,
’function_position_end’, ’function_position_first_edge’, or ’function_position_last_edge’.
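For illustration, a minimal sketch (not part of the original reference; it assumes the usual signature of set_metrology_object_fuzzy_param with handle, index, parameter name, and value) might look as follows:

* Set a fuzzy threshold for metrology object 0 ...
set_metrology_object_fuzzy_param (MetrologyHandle, 0, 'fuzzy_thresh', 0.5)
* ... and read it back
get_metrology_object_fuzzy_param (MetrologyHandle, 0, 'fuzzy_thresh', \
                                  FuzzyThresh)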


Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; string / integer
Index of the metrology objects.
Default: ’all’
Suggested values: Index ∈ {’all’, 0, 1, 2}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: ’fuzzy_thresh’
List of values: GenParamName ∈ {’function_contrast’, ’function_position’, ’fuzzy_thresh’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value-array ; real / integer
Values of the generic parameters.
Result
If the parameters are valid, the operator get_metrology_object_fuzzy_param returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
get_metrology_object_indices, set_metrology_object_fuzzy_param
Possible Successors
set_metrology_object_fuzzy_param
See also
get_metrology_object_param
Module
2D Metrology

get_metrology_object_indices ( : : MetrologyHandle : Indices )

Get the indices of the metrology objects of a metrology model.

get_metrology_object_indices allows access to the indices of the metrology objects.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The metrology model is defined by the handle MetrologyHandle. The operator
get_metrology_object_indices returns the indices of the metrology objects in the parameter
Indices. Access to the parameters of the metrology objects is possible, e.g., with the operator
get_metrology_object_param.
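For illustration, a minimal sketch (not part of the original reference; 'num_measures' is used here only as an example of a generic parameter) might look as follows:

* Query all object indices and read one parameter per object
get_metrology_object_indices (MetrologyHandle, Indices)
for I := 0 to |Indices| - 1 by 1
    get_metrology_object_param (MetrologyHandle, Indices[I], 'num_measures', \
                                NumMeasures)
endfor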
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Indices (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Indices of the metrology objects.
Result
If the parameters are valid, the operator get_metrology_object_indices returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
read_metrology_model
Possible Successors
get_metrology_object_param, get_metrology_object_fuzzy_param
See also
get_metrology_object_num_instances
Module
2D Metrology

get_metrology_object_measures ( : Contours : MetrologyHandle,
Index, Transition : Row, Column )

Get the measure regions and the results of the edge location for the metrology objects of a metrology model.
get_metrology_object_measures allows to access the measure regions of the metrology objects that were
created with add_metrology_object_generic, add_metrology_object_circle_measure, etc.
as XLD contours and the results of the edge location in image coordinates that was performed by
apply_metrology_model.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The metrology model is defined by the handle MetrologyHandle. The parameter Index determines for which
metrology objects the information is accessed. With Index set to ’all’, the measure regions and the results of the
edge location for all metrology objects are accessed.
If positive and negative edges are available in the measure regions (see the generic parameter value
’measure_transition’ of the operator set_metrology_object_param), with the parameter Transition the
desired edges (positive or negative) can be selected. If Transition is set to ’positive’, only positive edges are
returned. If Transition is set to ’negative’, only negative edges are returned. All edges are returned if the
parameter Transition is set to ’all’.
The operator get_metrology_object_measures returns for each measure region one rectangular
XLD contour with the boundary of the measure region in the parameter Contours. After calling
apply_metrology_model, additionally the image coordinates of the results of the edge location are returned
as single points in the parameters Row and Column. Note that the order of the values of these points is not
defined. Furthermore, it is not possible to assign the results of the edge location to specific measure regions. If
get_metrology_object_measures is called before apply_metrology_model, the parameters Row
and Column remain empty.
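For illustration, a minimal sketch (not part of the original reference; gen_cross_contour_xld is used here only to visualize the returned edge points) might look as follows:

apply_metrology_model (Image, MetrologyHandle)
* Get the measure regions and all located edge points of all objects
get_metrology_object_measures (Contours, MetrologyHandle, 'all', 'all', \
                               Row, Column)
* Mark each located edge point with a small cross
gen_cross_contour_xld (Crosses, Row, Column, 6, 0.785398)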
Parameters

. Contours (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; object
Rectangular XLD Contours of measure regions.
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; string / integer
Index of the metrology objects.
Default: ’all’
Suggested values: Index ∈ {’all’, 0, 1, 2}
. Transition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Select light/dark or dark/light edges.
Default: ’all’
List of values: Transition ∈ {’all’, ’negative’, ’positive’}
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinates of the measured edges.


. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinates of the measured edges.
Result
If the parameters are valid, the operator get_metrology_object_measures returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
apply_metrology_model
See also
add_metrology_object_generic, add_metrology_object_ellipse_measure,
add_metrology_object_line_measure, add_metrology_object_rectangle2_measure,
add_metrology_object_circle_measure
Module
2D Metrology

get_metrology_object_model_contour ( : Contour : MetrologyHandle,
Index, Resolution : )

Query the model contour of a metrology object in image coordinates.

get_metrology_object_model_contour returns the contours for the chosen metrology objects in image
coordinates.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The metrology model is defined by the handle MetrologyHandle. The parameter Index specifies for which
metrology objects the contours are queried. For Index set to ’all’, the contours of all metrology objects are
returned.
The form and pose of each contour is determined by the parameters set when adding the object using, e.g.,
add_metrology_object_generic, add_metrology_object_circle_measure, etc. If a different
reference coordinate system was set for the metrology model using set_metrology_model_param or an
alignment of the metrology model was performed using align_metrology_model, these values influence the
current pose of the metrology objects and thus the pose of the contours returned in Contour.
The resolution of the returned Contour is controlled via Resolution containing the Euclidean distance (in
pixel) between neighboring contour points. If the input value falls below the minimal possible value (1.192e-7),
the resolution is set internally to the smallest valid value.
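For illustration, a minimal sketch (not part of the original reference) might look as follows:

* Get the model contours of all metrology objects with a point distance of 1.5 pixels
get_metrology_object_model_contour (Contour, MetrologyHandle, 'all', 1.5)
dev_display (Contour)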
Parameters
. Contour (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; object
Model contour.
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Index of the metrology object.
Default: 0
Suggested values: Index ∈ {’all’, 0, 1, 2}
. Resolution (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Distance between neighboring contour points.
Default: 1.5
Restriction: Resolution >= 1.192e-7


Result
If the parameters are valid, the operator get_metrology_object_model_contour returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
add_metrology_object_generic, add_metrology_object_circle_measure,
add_metrology_object_ellipse_measure,
add_metrology_object_rectangle2_measure, add_metrology_object_line_measure
Possible Successors
apply_metrology_model
See also
set_metrology_model_param, get_metrology_object_measures,
align_metrology_model
Module
2D Metrology

get_metrology_object_num_instances ( : : MetrologyHandle,
Index : NumInstances )

Get the number of instances of the metrology objects of a metrology model.


get_metrology_object_num_instances provides access to the number of instances (results) of the
measurements performed by apply_metrology_model for the metrology objects. Note that by default, the
maximum number of instances of each metrology object is set to 1. Thus, by default, the result of
get_metrology_object_num_instances will typically be 1 as well. To allow more instances, be-
fore applying the measurement with apply_metrology_model you have to explicitly set the parameter
’num_instances’ to a higher value or to ’all’ using set_metrology_object_param.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The metrology model is defined by the handle MetrologyHandle. The parameter Index specifies for which
metrology object the instances are queried. For Index set to ’all’, the number of instances of all metrology objects
are returned. The number of instances is returned in NumInstances for each metrology object that was passed
in Index.
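A minimal HDevelop sketch (the handle, the image, and the object index are illustrative assumptions) showing how several instances are enabled before the measurement and queried afterwards:
    * Allow all instances of each metrology object to be found.
    set_metrology_object_param (MetrologyHandle, 'all', 'num_instances', 'all')
    * Image is assumed to have been acquired beforehand.
    apply_metrology_model (Image, MetrologyHandle)
    * Query how many instances were actually found for metrology object 0.
    get_metrology_object_num_instances (MetrologyHandle, 0, NumInstances)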
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Index of the metrology objects.
Default: 0
Suggested values: Index ∈ {’all’, 0, 1, 2}
. NumInstances (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; real / integer
Number of Instances of the metrology objects.
Result
If the parameters are valid, the operator get_metrology_object_num_instances returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).


• Processed without parallelization.


Possible Predecessors
apply_metrology_model
Possible Successors
clear_metrology_model
See also
get_metrology_object_indices
Module
2D Metrology

get_metrology_object_param ( : : MetrologyHandle, Index,
    GenParamName : GenParamValue )

Get one or several parameters of a metrology model.


get_metrology_object_param provides access to the parameters that are used by a metrology object.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The metrology model is defined by the handle MetrologyHandle. The parameter Index determines for
which metrology objects the information is accessed. With Index set to ’all’, the parameters of all metrology
objects are accessed. The names of the desired parameters are passed in the generic parameter GenParamName,
the corresponding values are returned in GenParamValue in the same order. All these general parame-
ters can be set and changed at any time with the operator set_metrology_object_param. Parame-
ters that describe the geometry of an object can only be set by creating the metrology object with the oper-
ators add_metrology_object_circle_measure, add_metrology_object_ellipse_measure,
add_metrology_object_line_measure, or add_metrology_object_rectangle2_measure.
The following parameters can be accessed:

• Valid for all types of metrology objects:


’min_score’, ’num_instances’, ’instances_outside_measure_regions’: The meaning and the use of these pa-
rameters is described with the operator set_metrology_object_param.
’rand_seed’, ’distance_threshold’, ’max_num_iterations’: The meaning and the use of these parameters is
described with the operator set_metrology_object_param.
’measure_length1’, ’measure_length2’: The meaning and the use of these parameters is described with the
operator set_metrology_object_param.
’measure_sigma’, ’measure_threshold’, ’measure_transition’, ’measure_select’: The meaning and the use
of these parameters is described with the operator measure_pos by the parameters Sigma,
Threshold, Transition, and Select.
’measure_interpolation’: The meaning and the use of this parameter is described with the operator
gen_measure_rectangle2 by the parameter Interpolation.
’measure_distance_min’: Returns the minimum distance between the centers of the generated mea-
sure regions, which depends on the geometry of the object and the value of the input pa-
rameter ’measure_distance’ or the value of the input parameter ’num_measures’ of the operator
set_metrology_object_param. For a metrology object circle or a metrology object line the
distances between measure regions are uniformly distributed. Therefore, ’measure_distance_min’ and
’measure_distance_max’ return the same value.
’measure_distance_max’: Returns the maximum distance between the centers of the generated mea-
sure regions, which depends on the geometry of the object and the value of the input pa-
rameter ’measure_distance’ or the value of the input parameter ’num_measures’ of the operator
set_metrology_object_param. For a metrology object circle or a metrology object line the
distances between measure regions are uniformly distributed. Therefore, ’measure_distance_min’ and
’measure_distance_max’ return the same value.
’num_measures’: Returns the number of generated measure regions, which depends on the geometry of
the object and the value of the input parameter ’measure_distance’ or the value of the input parameter
’num_measures’ of the operator set_metrology_object_param.


’object_type’: Type of the geometric shape of the metrology object. For a metrology object of type circle, the
output parameter GenParamValue contains the value ’circle’. For a metrology object of type ellipse,
the output parameter GenParamValue contains the value ’ellipse’. For a metrology object of type
line, the output parameter GenParamValue contains the value ’line’. For a metrology object of type
rectangle, the output parameter GenParamValue contains the value ’rectangle’.
’object_params’: The parameters of the geometric shape of the metrology object. For a metrology object of
type circle, the output parameter GenParamValue contains the geometry of the circle in the following
order: ’row’, ’column’, ’radius’. The meaning and the use of these parameters is described with the
operator add_metrology_object_circle_measure. For a metrology object of type ellipse,
the output parameter GenParamValue contains the geometry of the ellipse in the following order:
’row’, ’column’, ’phi’, ’radius1’, ’radius2’. The meaning and the use of these parameters is described
with the operator add_metrology_object_ellipse_measure. For a metrology object of type
line, the output parameter GenParamValue contains the geometry of the line in the following order:
’row_begin’, ’column_begin’, ’row_end’, ’column_end’. The meaning and the use of these parameters
is described with the operator add_metrology_object_line_measure. For a metrology object
of type rectangle, the output parameter GenParamValue contains the geometry of the rectangle in
the following order: ’row’, ’column’, ’phi’, ’length1’, ’length2’. The meaning and the use of these
parameters is described with the operator add_metrology_object_rectangle2_measure.
• Only valid for a metrology object of type circle:
’row’, ’column’, ’radius’: These are parameters for a metrology object of type cir-
cle. The meaning and the use of these parameters is described with the operator
add_metrology_object_circle_measure.
• Only valid for a metrology object of type ellipse:
’row’, ’column’, ’phi’, ’radius1’, ’radius2’: These are parameters for a metrology object of type el-
lipse. The meaning and the use of these parameters is described with the operator
add_metrology_object_ellipse_measure.
• Only valid for a metrology object of type line:
’row_begin’, ’column_begin’, ’row_end’, ’column_end’: These are parameters for a metrology object
of type line. The meaning and the use of these parameters is described with the operator
add_metrology_object_line_measure.
• Only valid for a metrology object of type rectangle:
’row’, ’column’, ’phi’, ’length1’, ’length2’: These are parameters for a metrology object of type rect-
angle. The meaning and the use of these parameters is described with the operator
add_metrology_object_rectangle2_measure.
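A minimal HDevelop sketch of querying some of the parameters listed above (the handle and the object index are illustrative assumptions):
    * Query the number of measure regions and the edge threshold of object 0.
    get_metrology_object_param (MetrologyHandle, 0, ['num_measures','measure_threshold'], ParamValues)
    * Query the geometry of the same object in one call.
    get_metrology_object_param (MetrologyHandle, 0, 'object_params', ObjectParams)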

Parameters

. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle


Handle of the metrology model.
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; string / integer
Index of the metrology objects.
Default: ’all’
Suggested values: Index ∈ {’all’, 0, 1, 2}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: ’num_measures’
List of values: GenParamName ∈ {’column’, ’column_begin’, ’column_end’, ’distance_threshold’,
’end_phi’, ’instances_outside_measure_regions’, ’length1’, ’length2’, ’max_num_iterations’,
’measure_distance_min’, ’measure_distance_max’, ’measure_interpolation’, ’measure_length1’,
’measure_length2’, ’measure_select’, ’measure_sigma’, ’measure_threshold’, ’measure_transition’,
’min_score’, ’num_instances’, ’num_measures’, ’object_params’, ’object_type’, ’phi’, ’point_order’, ’radius’,
’radius1’, ’radius2’, ’rand_seed’, ’row’, ’row_begin’, ’row_end’, ’start_phi’, ’x’, ’y’, ’x_begin’, ’y_begin’,
’x_end’, ’y_end’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value-array ; string / real / integer
Values of the generic parameters.


Result
If the parameters are valid, the operator get_metrology_object_param returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
get_metrology_object_indices, set_metrology_object_param
Possible Successors
set_metrology_object_param
See also
get_metrology_object_fuzzy_param, get_metrology_object_num_instances
Module
2D Metrology

get_metrology_object_result ( : : MetrologyHandle, Index,
    Instance, GenParamName, GenParamValue : Parameter )

Get the results of the measurement of a metrology model.


get_metrology_object_result provides access to the results of a measurement obtained by
apply_metrology_model for the metrology objects of the metrology model MetrologyHandle.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The parameter Index specifies for which metrology objects the results are queried. For Index set to ’all’, the
results of all metrology objects are returned. With the parameter Instance it can be specified, which instances
of the results are returned in Parameter. The results for all instances are returned by setting Instance to ’all’.
Different generic parameters can be used to control the returned values in Parameter. The generic parameter
names are passed in GenParamName. The corresponding values are passed in GenParamValue. The following
parameters and values are possible:

’result_type’: If GenParamName is set to ’result_type’, then GenParamValue allows to control how and what
results are returned for a metrology object. All measured parameters of the queried metrology object can be
queried at once, specific parameters can be queried individually or the score for the metrology object can be
queried.
’Obtaining all parameters’: If GenParamValue is set to ’all_param’, then all measured parame-
ters of a metrology object are returned. If camera parameters and a pose have been set (see
set_metrology_model_param), the results are returned in metric coordinates, otherwise in pixels.
For a circle, the return values are the coordinates of the center and the radius of the circle. The order is
[’row’, ’column’, ’radius’] or [’x’, ’y’, ’radius’] respectively.
For an ellipse, the return values are the coordinates of the center, the orientation of the major axis
’phi’, the length of the larger half axis ’radius1’, and the length of the smaller half axis ’radius2’ of the
ellipse. The order is [’row’, ’column’, ’phi’, ’radius1’, ’radius2’] or [’x’, ’y’, ’phi’, ’radius1’, ’radius2’]
respectively.
For a line, the start and end point of the line is returned. The order is [’row_begin’, ’column_begin’,
’row_end’, ’column_end’] or [’x_begin’, ’y_begin’, ’x_end’, ’y_end’]
For a rectangle, the return values are the coordinates of the center, the orientation of the main axis
’phi’, the length of the larger half edge ’length1’, and the length of the smaller half edge ’length2’ of
the rectangle. The order is [’row’, ’column’, ’phi’, ’length1’, ’length2’] or [’x’, ’y’, ’phi’, ’length1’,
’length2’] respectively.
’Obtaining specific parameters’: Measured object parameters can also be queried individually by providing
the desired parameter name in GenParamValue.


When no camera parameters and no measurement plane are set, the following parameters can be queried
individually, depending on whether they are available for the respective object. Note that for lines,
additionally the 3 parameters of the Hessian normal form can be queried, i.e., the unit normal vector
’nrow’, ’ncolumn’ and the orthogonal distance ’distance’ of the line from the origin of the coordinate
system. The sign of the distance determines the side of the line on which the origin is located.
List of values: ’row’, ’column’, ’radius’, ’phi’ , ’radius1’, ’radius2’, ’length1’, ’length2’, ’row_begin’,
’column_begin’, ’row_end’, ’column_end’, ’nrow’, ’ncolumn’, ’distance’
If camera parameters and a measurement plane were set, the parameters are returned in metric coordi-
nates. The following parameters can be queried individually, depending on whether they are available for
the respective object. Note that for lines, additionally the 3 parameters of the Hessian normal form can
be queried, i.e., the unit normal vector ’nx’, ’ny’ and the orthogonal distance ’distance’ of the line from
the origin of the coordinate system. The sign of the distance determines the side of the line on which the
origin is located.
List of values: ’x’, ’y’, ’radius’, ’phi’, ’radius1’, ’radius2’, ’length1’, ’length2’, ’x_begin’, ’y_begin’,
’x_end’, ’y_end’, ’nx’, ’ny’, ’distance’
’Obtaining the score’: If GenParamValue is set to ’score’, the fitting scores are returned. The score
represents the number of measurements that are used for the calculation of the results divided by the
maximum number of measure regions.
’used_edges’: To query the edge points that were actually used for a fitted metrology object, you can choose
between the following values for GenParamValue:
’row’: Return the row coordinate of the edges that were used to fit the metrology object.
’column’: Return the column coordinate of the edges that were used to fit the metrology object.
’amplitude’: Return the edge amplitude of the edges that were used to fit the metrology object.
List of values: ’row’, ’column’, ’amplitude’
’angle_direction’: The parameter determines the rotation direction for angles that result from the fitting. Setting
the parameter ’angle_direction’ to ’positive’ the angle is specified between the main axis of the object and the
horizontal axis of the coordinate system in the mathematically positive direction (counterclockwise). Setting
the parameter ’angle_direction’ to ’negative’ the angle is specified between the main axis of the object and
the horizontal axis of the coordinate system in the mathematically negative direction (clockwise). The results
of the angles are returned in radians.
List of values: ’positive’, ’negative’
Default: ’positive’

It is possible to query the results of several metrology objects (see the parameter Index) and several instances
(see the parameter Instance) of the metrology objects simultaneously. The results are returned in the following
order in Parameter: 1st instance of 1st metrology object, 2nd instance of 1st metrology object, etc., 1st instance
of 2nd metrology object, 2nd instance of 2nd metrology object, etc.
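A minimal HDevelop sketch of the result queries described above (handle, indices, and variable names are illustrative assumptions):
    * All fitted parameters of the first instance of metrology object 0,
    * e.g., [row, column, radius] for a circle.
    get_metrology_object_result (MetrologyHandle, 0, 0, 'result_type', 'all_param', FittedParams)
    * Fitting score of the same instance.
    get_metrology_object_result (MetrologyHandle, 0, 0, 'result_type', 'score', Score)
    * Row coordinates of the edge points that were used for the fit.
    get_metrology_object_result (MetrologyHandle, 0, 0, 'used_edges', 'row', UsedRows)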
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Index of the metrology object.
Default: 0
Suggested values: Index ∈ {’all’, 0, 1, 2}
. Instance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; string / integer
Instance of the metrology object.
Default: ’all’
Suggested values: Instance ∈ {’all’, 0, 1, 2}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Name of the generic parameter.
Default: ’result_type’
List of values: GenParamName ∈ {’result_type’, ’angle_direction’, ’used_edges’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value-array ; string / real
Value of the generic parameter.
Default: ’all_param’
Suggested values: GenParamValue ∈ {’all_param’, ’score’, ’true’, ’false’, ’row’, ’column’, ’amplitude’,
’radius’, ’phi’, ’radius1’, ’radius2’, ’length1’, ’length2’, ’row_begin’, ’column_begin’, ’row_end’,


’column_end’, ’nrow’, ’ncolumn’, ’distance’, ’x’, ’y’, ’x_begin’, ’y_begin’, ’x_end’, ’y_end’, ’nx’, ’ny’,
’positive’, ’negative’}
. Parameter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer / string
Result values.
Result
If the parameters are valid, the operator get_metrology_object_result returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
apply_metrology_model
Possible Successors
clear_metrology_model
See also
get_metrology_object_result_contour, get_metrology_object_measures
Module
2D Metrology

get_metrology_object_result_contour ( : Contour : MetrologyHandle, Index,
    Instance, Resolution : )

Query the result contour of a metrology object.


get_metrology_object_result_contour returns for the chosen metrology objects and object instances,
the result contours of a measurement performed by apply_metrology_model in image coordinates.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The metrology model is defined by the handle MetrologyHandle. The parameter Index specifies for which
metrology objects the result contours are queried. For Index set to ’all’, the result contours of all metrology
objects are returned. If for a metrology object several results (instances) were computed, then the parameter
Instance specifies, for which instances the result contours are returned in Contour. The result contours for
all instances are obtained by setting Instance to ’all’.
The resolution of the resulting contour Contour is controlled via Resolution containing the Euclidean dis-
tance between neighboring contour points in pixel. If the input value falls below the minimal possible value
(1.192e-7), then the resolution is set internally to the smallest valid value.
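A minimal HDevelop sketch (the handle and the resolution value are illustrative assumptions):
    * Query the result contours of all instances of all metrology objects.
    get_metrology_object_result_contour (ResultContours, MetrologyHandle, 'all', 'all', 1.5)
    * Display them, e.g., on top of the measured image.
    dev_display (ResultContours)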
Parameters

. Contour (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; object


Result contour for the given metrology object.
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Index of the metrology object.
Default: 0
Suggested values: Index ∈ {’all’, 0, 1, 2}
. Instance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; string / integer
Instance of the metrology object.
Default: ’all’
Suggested values: Instance ∈ {’all’, 0, 1, 2}


. Resolution (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real


Distance between neighboring contour points.
Default: 1.5
Restriction: Resolution >= 1.192e-7
Result
If the parameters are valid, the operator get_metrology_object_result_contour returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
apply_metrology_model
See also
get_metrology_object_result, get_metrology_object_measures
Module
2D Metrology

read_metrology_model ( : : FileName : MetrologyHandle )

Read a metrology model from a file.


read_metrology_model reads a metrology model, which has been written to file with
write_metrology_model, from the file FileName. The default HALCON file extension for a metrology
model is ’mtr’. The values contained in the read metrology model are stored in a metrology model with the handle
MetrologyHandle. Access to the parameters of the metrology model is possible, e.g., with the operator
get_metrology_object_param or get_metrology_object_fuzzy_param.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
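A minimal HDevelop sketch (the file and image names are illustrative assumptions):
    * Read a previously written metrology model and apply it to a new image.
    read_metrology_model ('my_metrology_model.mtr', MetrologyHandle)
    read_image (Image, 'my_image')
    apply_metrology_model (Image, MetrologyHandle)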
Parameters
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name.
File extension: .mtr
. MetrologyHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
Result
If the parameters are valid, the operator read_metrology_model returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
get_metrology_object_indices, apply_metrology_model
See also
write_metrology_model
Module
2D Metrology


reset_metrology_object_fuzzy_param ( : : MetrologyHandle,
Index : )

Reset all fuzzy parameters and fuzzy functions of a metrology model.


reset_metrology_object_fuzzy_param discards all fuzzy parameters and fuzzy functions of the
metrology objects that can be set by the operator set_metrology_object_fuzzy_param and restores
the default values.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The metrology model is defined by the handle MetrologyHandle. The parameter Index determines the
metrology objects to reset. With Index set to ’all’, all metrology objects are reset.
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; string / integer
Index of the metrology objects.
Default: ’all’
Suggested values: Index ∈ {’all’, 0, 1, 2}
Result
If the parameters are valid, the operator reset_metrology_object_fuzzy_param returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:

• MetrologyHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
set_metrology_object_fuzzy_param
See also
reset_metrology_object_param
Module
2D Metrology

reset_metrology_object_param ( : : MetrologyHandle, Index : )

Reset all parameters of a metrology model.


reset_metrology_object_param discards all settings of the parameters for the metrology objects that can
be set by the operator set_metrology_object_param and restores the default values.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The metrology model is defined by the handle MetrologyHandle. The parameter Index determines the
metrology objects to reset. With Index set to ’all’, all metrology objects are reset.


Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; string / integer
Index of the metrology objects.
Default: ’all’
Suggested values: Index ∈ {’all’, 0, 1, 2}
Result
If the parameters are valid, the operator reset_metrology_object_param returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:

• MetrologyHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
set_metrology_object_param
See also
reset_metrology_object_fuzzy_param
Module
2D Metrology

serialize_metrology_model ( : : MetrologyHandle : SerializedItemHandle )

Serialize a metrology model.


serialize_metrology_model serializes the data of a metrology model (see
fwrite_serialized_item for an introduction of the basic principle of serialization).
The same data that is written in a file by write_metrology_model is converted to a serialized item. The
metrology model is defined by the handle MetrologyHandle. The serialized metrology model is returned by
the handle SerializedItemHandle and can be deserialized by deserialize_metrology_model.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
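A minimal HDevelop sketch of writing the serialized model to a binary file (the file name is an illustrative assumption):
    serialize_metrology_model (MetrologyHandle, SerializedItemHandle)
    open_file ('metrology_model.bin', 'output_binary', FileHandle)
    fwrite_serialized_item (FileHandle, SerializedItemHandle)
    close_file (FileHandle)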
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. SerializedItemHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
Result
If the parameters are valid, the operator serialize_metrology_model returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).


• Processed without parallelization.


Possible Predecessors
create_metrology_model, add_metrology_object_circle_measure,
add_metrology_object_ellipse_measure, add_metrology_object_line_measure,
add_metrology_object_rectangle2_measure, set_metrology_object_param,
set_metrology_object_fuzzy_param, read_metrology_model
Possible Successors
fwrite_serialized_item, send_serialized_item, deserialize_metrology_model
Module
2D Metrology

set_metrology_model_image_size ( : : MetrologyHandle, Width,
    Height : )

Set the size of the image of metrology objects.


set_metrology_model_image_size is used to set or change the size of the image in which the edge
detection that is related to a metrology model will be performed.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The metrology model is defined by the handle MetrologyHandle. The image width must be specified by the
parameter Width. The image height must be specified by the parameter Height.
Attention
Note that the operator set_metrology_model_image_size should be called before adding
metrology objects to the metrology model using the operators add_metrology_object_generic,
add_metrology_object_circle_measure, add_metrology_object_ellipse_measure,
add_metrology_object_line_measure, or add_metrology_object_rectangle2_measure.
Otherwise, all measure regions of existing metrology objects will be recomputed automatically upon calling
set_metrology_model_image_size or apply_metrology_model.
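A minimal HDevelop sketch of the recommended call order (the image size and the circle parameters are illustrative assumptions):
    create_metrology_model (MetrologyHandle)
    * Set the image size before adding any metrology objects.
    set_metrology_model_image_size (MetrologyHandle, 640, 480)
    add_metrology_object_circle_measure (MetrologyHandle, 240, 320, 100, 20, 5, 1, 30, [], [], Index)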
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the image to be processed.
Default: 640
Suggested values: Width ∈ {128, 192, 256, 512, 640, 768, 1024, 1280, 2048}
Value range: 0 ≤ Width (lin)
Minimum increment: 1
Recommended increment: 16
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the image to be processed.
Default: 480
Suggested values: Height ∈ {128, 192, 256, 512, 640, 768, 1024, 1280, 2048}
Value range: 0 ≤ Height (lin)
Minimum increment: 1
Recommended increment: 16
Result
If the parameters are valid, the operator set_metrology_model_image_size returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.


This operator modifies the state of the following input parameter:


• MetrologyHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_metrology_model
Possible Successors
set_metrology_model_param, add_metrology_object_circle_measure,
add_metrology_object_ellipse_measure, add_metrology_object_line_measure,
add_metrology_object_rectangle2_measure, add_metrology_object_generic
Module
2D Metrology

set_metrology_model_param ( : : MetrologyHandle, GenParamName,
    GenParamValue : )

Set parameters that are valid for the entire metrology model.
set_metrology_model_param sets or changes parameters that are valid for the entire metrology model
MetrologyHandle.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The following values for GenParamName and GenParamValue are possible:
Calibration
If both internal camera parameters and the 3D pose of the measurement plane are set,
apply_metrology_model calculates the results in metric coordinates.

’camera_param’: Often the internal camera parameters are the result of calibrating the camera with the operator
calibrate_cameras (see Calibration for the sequence of the parameters and the underlying camera
model). It is possible to discard the internal camera parameters by setting ’camera_param’ to [].
Default: []
’plane_pose’: The 3D pose of the measurement plane in camera coordinates. It is possible to discard the pose by
setting ’plane_pose’ to [].
Default: []

Definition of a new reference system


When adding the metrology objects to the metrology model using e.g., add_metrology_object_generic,
add_metrology_object_circle_measure etc. the positions and orientations are given with respect
to the image coordinate system which has its origin in the upper left corner of the image. In some cases it
may be necessary to change the reference system with respect to which the metrology objects are given. This
is for instance the case when using a shape model to align the metrology model in a new image. The re-
sults from find_generic_shape_model can only be directly used in align_metrology_model if
the reference system of the metrology model is the same as the system in which the shape model is given (see
align_metrology_model for more details).

’reference_system’: The tuple given in GenParamValue should contain [row, column, angle]. By default the
reference system is the image coordinate system which has its origin in the top left corner. A new reference
system is defined with respect to the image coordinate system by its translation (row, column) and its rotation
angle (angle). All components of the metrology model are converted into the new reference coordinate
system. In the following figure, the reference system of the metrology model is set to the center of the image.
set_metrology_model_param(MetrologyHandle, ’reference_system’,
[Height/2,Width/2,0])


(1) Several metrology objects and their contours are shown in blue. (2) The new reference system for the
metrology model is placed in the center of the image. As a consequence, the positions and orientations of
the metrology objects are moved into the reverse direction. The resulting contours of the metrology objects
are shown in blue.

Default: [0, 0, 0]

Scaling the results


The results of the measurement queried by get_metrology_object_result can be scaled by setting a
scaling factor.

’scale’: The parameter ’scale’ must be specified as the ratio of the desired unit to the original unit. If no camera
parameters are given, the default unit is pixel.
If ’camera_param’ and ’plane_pose’ are set, the original unit is determined by the coordinates of the cal-
ibration object. Standard HALCON calibration plates are defined in metric coordinates. If it was used for
the calibration, the desired unit can be set directly. The relation of units to scaling factors is given in the
following table:

Unit          Scaling factor
m             1
dm            10
cm            100
mm            1000
um, microns   1000000

Suggested values: 1.0, 0.1, ’m’, ’cm’, ’mm’, ’microns’, ’um’


Default: 1.0
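A minimal HDevelop sketch (CamParam and PlanePose are assumed to come from a previous camera calibration; the scale value is an illustrative assumption):
    * Provide calibration data so that results are returned in metric units ...
    set_metrology_model_param (MetrologyHandle, 'camera_param', CamParam)
    set_metrology_model_param (MetrologyHandle, 'plane_pose', PlanePose)
    * ... and scale them to millimeters.
    set_metrology_model_param (MetrologyHandle, 'scale', 'mm')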

Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Name of the generic parameter.
Default: ’camera_param’
List of values: GenParamName ∈ {’camera_param’, ’plane_pose’, ’scale’, ’reference_system’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / real / integer
Value of the generic parameter.
Default: []
Suggested values: GenParamValue ∈ {1.0, 0.1, ’m’, ’cm’, ’mm’, ’microns’, ’um’}
Result
If the parameters are valid, the operator set_metrology_model_param returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:


• MetrologyHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_metrology_model, set_metrology_model_image_size
Possible Successors
add_metrology_object_generic, get_metrology_object_model_contour
See also
set_metrology_object_param, align_metrology_model, get_metrology_model_param
Module
2D Metrology

set_metrology_object_fuzzy_param ( : : MetrologyHandle, Index,
    GenParamName, GenParamValue : )

Set fuzzy parameters or fuzzy functions for a metrology model.


set_metrology_object_fuzzy_param is used to set or change the fuzzy parameters or fuzzy functions of
a metrology object in order to adapt the model to a particular edge selection before applying the operator
apply_metrology_model.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The metrology model is defined by the handle MetrologyHandle. The parameter Index specifies the metrol-
ogy objects for which the parameters should be changed or set. The parameters of all metrology objects are set if
the parameter Index is set to ’all’.
The fuzzy parameter or the type of fuzzy function is passed in the parameter GenParamName. The correspond-
ing value or the fuzzy function is passed in the parameter GenParamValue. If at least one fuzzy function
is set, internally the operator fuzzy_measure_pos will be used when searching the objects with the op-
erator apply_metrology_model. More information about fuzzy functions can be found with the opera-
tor fuzzy_measure_pos. The following generic parameters and parameter values for GenParamName and
GenParamValue are possible:

’fuzzy_thresh’: The parameter specifies the minimum fuzzy value. The meaning and the use of this parameter
is described with the operator fuzzy_measure_pos. There, the parameter corresponds to the parameter
FuzzyThresh.
Default: 0.5
’function_contrast’: The parameter specifies a fuzzy function of type contrast. The meaning and the use of
this parameter is described with the operator set_fuzzy_measure. There, the parameter corresponds to
the parameter SetType with the value ’contrast’ and its value corresponds to the parameter Function.
Default: ’disabled’
’function_position’: The parameter specifies a fuzzy function of type position. The meaning and the use of
this parameter is described with the operator set_fuzzy_measure. There, the parameter corresponds to
the parameter SetType with the value ’position’ and its value corresponds to the parameter Function.
Default: ’disabled’
’function_position_center’: The parameter specifies a fuzzy function of type position_center. The meaning
and the use of this parameter is described with the operator set_fuzzy_measure. There, the parameter
corresponds to the parameter SetType with the value ’position_center’ and its value corresponds to the parameter
Function.
Default: ’disabled’
’function_position_end’: The parameter specifies a fuzzy function of type position_end. The meaning and
the use of this parameter is described with the operator set_fuzzy_measure. There, the parameter cor-
responds to the parameter SetType with the value ’position_end’ and its value corresponds to the parameter
Function.
Default: ’disabled’


’function_position_first_edge’: The parameter specifies a fuzzy function of type position_first_edge. The


meaning and the use of this parameter is described with the operator set_fuzzy_measure. There, the
parameter corresponds to the parameter SetType with the value ’position_first_edge’ and its value corre-
sponds to the parameter Function.
Default: ’disabled’
’function_position_last_edge’: The parameter specifies a fuzzy function of type position_last_edge. The
meaning and the use of this parameter is described with the operator set_fuzzy_measure. There, the
parameter corresponds to the parameter SetType with the value ’position_last_edge’ and its value corre-
sponds to the parameter Function.
Default: ’disabled’

A fuzzy function is discarded if the fuzzy function value is set to ’disabled’. All pre-
viously defined fuzzy functions and fuzzy parameters can be discarded completely using
reset_metrology_object_fuzzy_param. The current configuration of the metrology objects can
be accessed with get_metrology_object_fuzzy_param. Note that if at least one fuzzy function is
specified, the operator fuzzy_measure_pos is used for the edge detection.
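A minimal HDevelop sketch of activating a fuzzy contrast function (the amplitude values 20 and 40 and the fuzzy threshold are illustrative assumptions):
    * Prefer edges with high amplitude: weight 0.0 below 20, weight 1.0 above 40.
    create_funct_1d_pairs ([20, 40], [0.0, 1.0], FuzzyContrast)
    set_metrology_object_fuzzy_param (MetrologyHandle, 'all', 'function_contrast', FuzzyContrast)
    set_metrology_object_fuzzy_param (MetrologyHandle, 'all', 'fuzzy_thresh', 0.6)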
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; string / integer
Index of the metrology objects.
Default: ’all’
Suggested values: Index ∈ {’all’, 0, 1, 2}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: ’fuzzy_thresh’
List of values: GenParamName ∈ {’function_contrast’, ’function_position’, ’function_position_center’,
’function_position_end’, ’function_position_first_edge’, ’function_position_last_edge’, ’fuzzy_thresh’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value-array ; real / integer
Values of the generic parameters.
Default: 0.5
Suggested values: GenParamValue ∈ {0.1, 0.3, 0.5, 0.6, 0.7, 0.9, 1, 2, 3, 4, 5, 10, 20}
Result
If the parameters are valid, the operator set_metrology_object_fuzzy_param returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• MetrologyHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
get_metrology_object_fuzzy_param
Possible Successors
apply_metrology_model, reset_metrology_object_fuzzy_param,
get_metrology_object_fuzzy_param
See also
set_metrology_object_param
Module
2D Metrology


set_metrology_object_param ( : : MetrologyHandle, Index,
    GenParamName, GenParamValue : )

Set parameters for the metrology objects of a metrology model.


set_metrology_object_param is used to set or change the different parameters of a metrology object.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The metrology model is defined by the handle MetrologyHandle. The parameter Index specifies
the metrology objects for which the parameters are set. The parameters of all metrology objects are set
if the parameter Index is set to ’all’. All parameters can also be set when creating a metrology ob-
ject with add_metrology_object_generic, add_metrology_object_circle_measure,
add_metrology_object_ellipse_measure, add_metrology_object_line_measure, or
add_metrology_object_rectangle2_measure. The current configuration of the metrology
model can be accessed with get_metrology_object_param. All parameters that can be set with
set_metrology_object_param can be reset with reset_metrology_object_param.
In the following, all generic parameters are listed with their default values. Note, however, that for many
applications the default values are sufficient and no adjustment is necessary. The following values for GenParamName and
GenParamValue are possible - ordered by different categories:
Creating measure regions:

’measure_length1’: The value of this parameter specifies the half length of the measure regions perpendicular to
the metrology object boundary. It is equivalent to the measure tolerance. The unit of this value is pixel.
Suggested values: 10.0, 20.0, 30.0
Default: 20.0
Restriction: ’measure_length1’ >= 1.0
’measure_length2’: The value of this parameter specifies the half length of the measure regions tangential to the
metrology object boundary. The unit of this value is pixel.
Suggested values: 3.0, 5.0, 10.0
Default: 5.0
Restriction: ’measure_length2’ >= 0.0
’measure_distance’: The value of this parameter specifies the desired distance between the centers of two measure
regions. If the value leads to too few measure regions, the parameter has no influence and the number of
measure regions will be increased to the minimum required number of measure regions (circle = 3, ellipse =
5, line = 2, rectangle = 2 per side = 8). The unit of this value is pixel.
If this value is set, the parameter ’num_measures’ has no influence.
Suggested values: 5.0, 15.0, 20.0, 30.0
Default: 10.0
’num_measures’: The value of this parameter specifies the desired number of measure regions.
The minimum number of measure regions depends on the type of the metrology object:
• Line: 2 measure regions
• Circle: 3 measure regions
• Circular arc: 4 measure regions
• Ellipse: 5 measure regions
• Elliptic arc: 6 measure regions
• Rectangle: 8 measure regions (2 regions each side)
If the chosen value is too low, ’num_measures’ is automatically set to the respective minimum value.
If this value is set, the parameter ’measure_distance’ has no influence.
Suggested values: 8, 10, 16, 20, 30, 50, 100

Edge detection:

’measure_sigma’: The parameter specifies the sigma for the Gaussian smoothing. The meaning, the use, and the
default value of this parameter are described with the operator measure_pos by the parameter Sigma.
’measure_threshold’: The parameter specifies the minimum edge amplitude. The meaning, the use, and the default
value of this parameter are described with the operator measure_pos by the parameter Threshold.


’measure_select’: The parameter specifies the selection of end points of the edges. The meaning, the use, and the
default value of this parameter are described with the operator measure_pos by the parameter Select.
’measure_transition’: The parameter specifies the use of dark/light or light/dark edges. The meaning and the use
of the values ’all’, ’positive’, and ’negative’ for the parameter ’measure_transition’ is described with the
operator measure_pos by the parameter Transition. Additionally, ’measure_transition’ can be set to
the value ’uniform’. Then, all positive edges (dark/light edges) and all negative edges (light/dark edges) are
detected by the edge detection but when fitting the geometric shapes, the edges with different edge types are
used separately, i.e., for each instance of a geometric shape either only the positive edges or the negative
edges are used.
The measure direction within the measure regions is from the inside to the outside of the metrology object
for objects of the types circle, ellipse, or rectangle. For metrology objects of the type line, the measure direction
within the measure regions is from the left to the right, seen from the first point of the line (see RowBegin
and ColumnBegin of the operator add_metrology_object_line_measure).
List of values: ’all’, ’negative’, ’positive’, ’uniform’
Default: ’all’
’measure_interpolation’: The parameter specifies the type of interpolation to be used. The meaning, the use and
the default value of this parameter is described with the operator gen_measure_rectangle2 by the
parameter Interpolation.

Fitting the geometric shapes:

’min_score’: The parameter determines what score a potential instance must at least have to be regarded as a valid
instance of the metrology object. The score is the number of detected edges that are used to compute the
results divided by the maximum number of measure regions (see apply_metrology_model). If it can
be expected that all edges of the metrology object are present, the parameter ’min_score’ can be set to a value
as high as 0.8 or even 0.9. Note that in images with a high degree of clutter or strong background texture the
parameter ’min_score’ should be set to a value not much lower than 0.7 since otherwise false instances of a
metrology object could be found.
Suggested values: 0.5, 0.7, 0.9
Default: 0.7
’num_instances’: The parameter specifies the maximum number of successfully fitted instances of each metrology
object after which the fitting will stop (see apply_metrology_model). Successfully fitted instances of
the metrology objects must have a score of at least the value of ’min_score’.
Suggested values: 1, 2, 3, 4
Default: 1
’distance_threshold’: apply_metrology_model uses a randomized search algorithm (RANSAC) to fit the
geometric shapes. An edge point is considered to be part of a fitted geometric shape, if the distance of the
edge point to the geometric shape does not exceed the value of ’distance_threshold’.
Suggested values: 0, 1.0, 2.0, 3.5, 5.0
Default: 3.5
’max_num_iterations’: The RANSAC algorithm estimates the number of iterations necessary for fitting the re-
quested geometric shape. The estimation is based on the extracted edge data and the complexity of the shape.
When setting the value of the parameter ’max_num_iterations’, an upper limit for the computed number of
iterations is defined. The number of iterations is still estimated by the RANSAC algorithm but cannot exceed
the value of ’max_num_iterations’. Setting this parameter can be helpful, if the quality of the fitting is not
as important as observing time limits. However, if ’max_num_iterations’ is set too low, the algorithm will
return low-quality or no results.
By default, ’max_num_iterations’ is set to -1, indicating that no additional upper limit is set for the number
of iterations of the RANSAC algorithm.
Suggested values: 10, 100, 1000
Default: -1
’rand_seed’: The parameter specifies the seed for the random number generator for the RANSAC algorithm that
is used for the selection of the edges in the operator apply_metrology_model. If the value of the
parameter ’rand_seed’ is set to a number unequal to the value 0, the operator yields the same result on every
call with the same parameters, because the internally used random number generator is initialized with the
value of the parameter ’rand_seed’.


If the parameter ’rand_seed’ is set to the value 0, the random number generator is initialized with the current
time. In this case, the results are not reproducible.
Suggested values: 0, 1, 42
Default: 42
’instances_outside_measure_regions’: The parameter specifies the validation of the results of measurements. If
the value of the parameter ’instances_outside_measure_regions’ is set to the value ’false’, only those
resulting instances of a metrology object that lie inside the major axis of the measure regions of this
metrology object are valid. Instances which are not valid are not stored. If the value of the parameter ’in-
stances_outside_measure_regions’ is set to the value ’true’, all instances of a metrology object are valid.
List of values: ’true’, ’false’
Default: ’false’
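A minimal HDevelop sketch of adjusting some of the parameters listed above (all values and the image are illustrative assumptions):
    * Adapt measure regions, edge threshold, and minimum score of object 0.
    set_metrology_object_param (MetrologyHandle, 0, ['measure_length1','num_measures','measure_threshold','min_score'], [15.0, 20, 25, 0.8])
    * Image is assumed to have been acquired beforehand.
    apply_metrology_model (Image, MetrologyHandle)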

Parameters

. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle


Handle of the metrology model.
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; string / integer
Index of the metrology objects.
Default: ’all’
Suggested values: Index ∈ {’all’, 0, 1, 2}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: ’num_instances’
List of values: GenParamName ∈ {’distance_threshold’, ’instances_outside_measure_regions’,
’max_num_iterations’, ’measure_distance’, ’measure_interpolation’, ’measure_length1’, ’measure_length2’,
’measure_select’, ’measure_sigma’, ’measure_threshold’, ’measure_transition’, ’min_score’,
’num_instances’, ’num_measures’, ’rand_seed’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; string / real / integer
Values of the generic parameters.
Default: 1
Suggested values: GenParamValue ∈ {1, 2, 3, 4, 5, 10, 20, ’all’, ’true’, ’false’, ’first’, ’last’, ’positive’,
’negative’, ’uniform’, ’nearest_neighbor’, ’bilinear’, ’bicubic’}
Result
If the parameters are valid, the operator set_metrology_object_param returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• MetrologyHandle

During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
get_metrology_object_param
Possible Successors
apply_metrology_model, reset_metrology_object_param,
get_metrology_object_param
See also
set_metrology_object_fuzzy_param
Module
2D Metrology


write_metrology_model ( : : MetrologyHandle, FileName : )

Write a metrology model to a file.


write_metrology_model writes a metrology model to the file FileName. The metrology model is defined
by the handle MetrologyHandle. The metrology model can be read with read_metrology_model. The
default HALCON file extension for a metrology model is ’mtr’.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
Attention
Note that only the input values are saved, i.e., no measure regions and no results obtained by the operator
apply_metrology_model are saved.
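A minimal HDevelop sketch (the file name is an illustrative assumption):
    * Save the configured model; it can later be restored with read_metrology_model.
    write_metrology_model (MetrologyHandle, 'my_metrology_model.mtr')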
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name.
File extension: .mtr
Result
If the parameters are valid, the operator write_metrology_model returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
apply_metrology_model
Possible Successors
clear_metrology_model
See also
read_metrology_model
Module
2D Metrology



Chapter 3

3D Matching

This chapter gives an overview of the different 3D matching approaches available in HALCON.
3D Box Finder
As its name suggests, the box finder can be used to locate box-shaped objects in 3D data. No model of the
object is needed as input for the operator find_box_3d; only the dimensions of the boxes to be found are
required. As a result you can retrieve the pose of a gripping point, which is especially useful in bin picking
applications.

(1) 3D input data (scene), (2) found instance, including a gripping point.

Surface-Based Matching
The surface-based matching approach is suited to locate more complex objects as well. The shape of these objects
is passed to the operator find_surface_model, or find_surface_model_image respectively, in the
form of a surface model. The poses of the found object instances in the scene are then returned.
Note that there are several different approaches when using surface-based matching. For detailed explanations
regarding when and how to use these approaches, tips, tricks, and troubleshooting, have a look at the technical note
on Surface-Based Matching.


(1) 3D model to be searched for, (2) 3D input data (scene), (3) matching result.

Deformable Surface-Based Matching


If an object can occur in the scene in different, deformed states, you can use a deformable surface model to locate the object in the scene. If an instance of such an object is found by the operator find_deformable_surface_model, the object model can be retrieved with the respective deformation and pose.



(1) 3D object model, (2) 3D input data to be searched (scene), (3) model, transformed into the matched pose, (4)
deformed object model.

Shape-Based Matching
With shape-based matching, instances of a 3D CAD model are searched for in 2D images instead of 3D point clouds. For this, the edges of the target object need to be clearly visible in the image and the camera used needs to be calibrated beforehand. As a result, the object pose is computed and returned by the operator find_shape_model_3d.



(1) 3D shape model, (2) input image, (3) found shape model, projected into the image.

3D Gripping Point Detection


3D Gripping Point Detection is a deep-learning-based approach to detect gripping points on arbitrary objects in a
3D scene. For further information please see the chapter 3D Matching / 3D Gripping Point Detection.




(1) 2D input image (intensity image), (2) 3D scene (generated from XYZ-images), (3) Visualization of estimated
gripping points on the 2D input image, (4) Visualization of estimated gripping points as poses in the 3D scene.

Deep 3D Matching
Deep 3D Matching is a deep-learning-based approach to detect objects in a scene and compute their 3D pose. For
further information please see the chapter 3D Matching / Deep 3D Matching.



(1) Input scenes for an object, (2) computed 3D poses of the object in 2D image, (3) computed 3D poses of the
object in 3D plot.

3.1 3D Box

find_box_3d ( : : ObjectModel3DScene, SideLen1, SideLen2, SideLen3,


MinScore, GenParam : GrippingPose, Score, ObjectModel3DBox,
BoxInformation )

Find boxes in 3D data.


find_box_3d finds boxes in the 3D object model ObjectModel3DScene and returns, for each found box, the pose of a gripping point in GrippingPose, a 3D object model in ObjectModel3DBox, a score value in Score, and a dictionary BoxInformation containing further information about the found boxes.
The side lengths of the boxes are passed in SideLen1, SideLen2, and SideLen3. Each length consists of a
tuple of two values, indicating the minimum and maximum length of that side. If only a single face of the box is
expected to be visible, or no restriction should be applied to the remaining box length, SideLen3 can be set to
-1.
The parameter MinScore sets the minimum score for boxes to be returned. Boxes with a score smaller than this
value will not be returned.
ObjectModel3DScene must contain an XYZ-mapping, as is the case, for example, when it is created with xyz_to_object_model_3d.
Typical Workflow
A typical workflow for detecting 3D boxes in 3D data looks as follows:

1. Obtain the 3D data either as XYZ-images, or directly as a 3D object model with XYZ-mapping.


2. Remove as much background and clutter that is not part of any box from the scene as possible, in order to increase robustness and speed. To do so, use, for example, threshold and reduce_domain on the XYZ-images before calling xyz_to_object_model_3d. Further options are described in the section “Troubleshooting” below.
3. If the 3D data exists in the form of XYZ-images, convert them to a 3D object model using
xyz_to_object_model_3d.
4. Obtain the approximate box edge lengths that should be found. Note that changing those lengths later on
might make it necessary to also change other parameters, such as MinScore.
5. Call find_box_3d, passing the 3D object model with the scene and the approximate box edge lengths.
6. Use the procedure visualize_object_model_3d to visualize the results, if necessary.
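The following HDevelop sketch illustrates this workflow. The threshold values, box dimensions, and image variables (ImageX, ImageY, ImageZ from a 3D sensor) are only examples and depend on the sensor and application:

* Restrict the Z-range to the volume that contains the boxes (values are examples).
threshold (ImageZ, ROI, 0.3, 1.2)
reduce_domain (ImageX, ROI, ImageXReduced)
* Convert the XYZ-images to a 3D object model with XYZ-mapping.
xyz_to_object_model_3d (ImageXReduced, ImageY, ImageZ, ObjectModel3DScene)
* Approximate box edge lengths (min/max per side); the third side is unrestricted.
create_dict (GenParam)
find_box_3d (ObjectModel3DScene, [0.18,0.22], [0.09,0.11], -1, 0.6, GenParam, GrippingPose, Score, ObjectModel3DBox, BoxInformation)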

Understanding the Results


The boxes are returned in several ways.
First, the pose of a gripping point is returned in GrippingPose. The used side of the box and the z-axis of
the gripping pose are set according to the XYZ-mapping. If only a single side of the box is visible, the center
of the gripping pose is in the center of that side, and its z-axis is oriented away from the viewing point of the
XYZ-mapping. If multiple sides of the box are visible, the gripping pose lies in the center of the side that is most
parallel to the viewing point of the XYZ-mapping. The z-axis is again oriented away from the viewing point of
the XYZ-mapping. The x-axis of the gripping pose is set to the box axis that is roughly the most aligned with the
column direction of the XYZ-mapping. The y-axis is computed based on the x- and z-axis.
The box is also returned in triangulated form in ObjectModel3DBox. This allows a quick visualization of the
results.
For each found box, a score between 0 and 1 is returned in Score. The score indicates how well the box and its
edges are visible, and how well the found box matches the specified dimensions.
Finally, additional information about the results is returned in the dictionary BoxInformation.
get_dict_param and get_dict_tuple can be used to obtain further information about the results. Also,
the HDevelop handle inspect window can be used to inspect the returned dictionary.
The dictionary BoxInformation contains the following keys:

results: This key references a dictionary containing the found boxes. They are sorted according to their score
in descending order with ascending integer keys starting at 0.
Each box result is a dictionary with the following keys:
box_pose: This is the box’s pose in the coordinate system of the scene. This pose is used for visualizing
the found box.
box_length_x, box_length_y, box_length_z: The side lengths of the found box corresponding
to box_pose. box_length_x and box_length_y will always contain a positive number. If only
a single side of the box is visible, box_length_z will be set to 0.
gripping_pose: The same pose as returned in GrippingPose.
gripping_length_x, gripping_length_y, gripping_length_z: The side lengths of the
found box corresponding to GrippingPose. gripping_length_x and gripping_length_y
will always contain a positive number. If only a single side of the box is visible,
gripping_length_z will be set to 0.
score: The same score as returned in Score.
one_side_only: Boolean indicating whether only one side of the box is visible (’true’) or not (’false’).
gen_param: This is a dictionary with the parameters passed to find_box_3d. SideLen1, SideLen2, and
SideLen3 are pooled in a tuple with key lengths. The key min_score references MinScore. The
other keys are denoted analogously to the generic parameters of the dictionary GenParam.
sampled_edges: This is the 3D object model with sampled edges. It contains the viewing direction of the edge
points as normal vectors.
sampled_edges_direction: This is the 3D object model with sampled edges (same as for key sampled_edges). It contains the edge directions of the edge points as normal vectors.
sampled_scene: This is the sampled scene in which the boxes are looked for. It can be used for visualization
or debugging the sampling distance.


sampled_reference_points: This is a 3D object model with all points from the 3D scene that were used as
reference points in the matching process. For each reference point, the optimum pose of the box is computed
under the assumption that the reference point lies on the surface of the box.
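Based on these keys, the pose and score of the best box could, for example, be accessed as follows (a minimal sketch using the operators named above):

* The results are sorted by score in descending order; key 0 references the best box.
get_dict_tuple (BoxInformation, 'results', AllResults)
get_dict_tuple (AllResults, 0, BestBox)
get_dict_tuple (BestBox, 'box_pose', BoxPose)
get_dict_tuple (BestBox, 'score', BoxScore)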

Generic Parameters
Additional parameters can be passed as key/tuple pairs in the dictionary GenParam in order to improve the
matching process. The following parameter names serve as keys to their corresponding tuples (see create_dict
and set_dict_tuple).

3d_edges: Allows to manually set the 3D scene edges. The parameter must be a 3D object model handle. The
edges are usually a result of the operator edges_object_model_3d but can further be filtered in order
to remove outliers. If this parameter is not given, find_box_3d will internally extract the 3D edges similar
to the operator edges_object_model_3d.
3d_edge_min_amplitude: Sets the minimum amplitude of a discontinuity in order for it to be classi-
fied as an edge. Note that if edges were passed manually with the generic parameter 3d_edges, this
parameter is ignored. Otherwise, it behaves similar to the parameter MinAmplitude of the operator
edges_object_model_3d.
Restriction: 3d_edge_min_amplitude >= 0
Default: 10% of the smallest box diagonal.
max_gap: If no edges are passed with 3d_edges, the operator will extract 3D edges internally. The parameter
can be used to control the edge extraction.
max_gap has the same meaning as in edges_object_model_3d.
remove_outer_edges: Removes the outermost edges when set to ’true’. This is for example helpful for bin
picking applications in order to remove the bin.
List of values: ’false’, ’true’
Default: ’false’
max_num_boxes: Limits the number of returned boxes. By default, find_box_3d will return all detected boxes with a score larger than MinScore. This parameter can be used to limit the number of returned boxes.
Default: 0 (return all boxes)
box_type: Sets the type of boxes to search for. For ’full_box_visible’ only boxes with more than one side visible
are returned. If ’single_side_visible’ is set, boxes with only one visible side are searched for. If further box
sides are visible nonetheless, they are ignored. For ’all’ both types are returned.
List of values: ’all’, ’single_side_visible’, ’full_box_visible’
Default: ’all’
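The dictionary GenParam is created and filled with the operators create_dict and set_dict_tuple, for example (parameter values chosen for illustration only):

create_dict (GenParam)
* Search only for boxes of which a single side is visible and return at most 10 of them.
set_dict_tuple (GenParam, 'box_type', 'single_side_visible')
set_dict_tuple (GenParam, 'max_num_boxes', 10)
set_dict_tuple (GenParam, 'remove_outer_edges', 'true')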

Troubleshooting

Visualizing extracted edges and sampled scene: To debug the box detector, some of the internally used data can
be visualized by obtaining it from the returned dictionary BoxInformation, using get_dict_tuple.
The sampled 3D scene can be extracted with the key sampled_scene. Finding smaller boxes requires a
denser sampling and subsequently slows down the box detection.
The sampled 3D edges can be extracted with the key sampled_edges and
sampled_edges_directions. Both 3D object models contain the same points, however,
sampled_edges contains the viewing direction of the edge points as normal vectors, while
sampled_edges_directions contains the edge directions of the edge points as normal vectors.
Note that the edge directions should be perpendicular to the edges, pointing outwards of the boxes.
Improve performance: If find_box_3d is taking too long, the following steps might help to increase its per-
formance.
• Remove more background and clutter: A significant improvement in runtime and detection accuracy
can usually be achieved by removing as much of the background and clutter from the 3D scene as
possible.
The most common approaches for removing unwanted data are:
– Thresholding the X-, Y- and Z-coordinates, either by using threshold and reduce_domain
on the XYZ-images before calling xyz_to_object_model_3d, or by using
select_points_object_model_3d directly on the 3D object model that contains the
scene.


– Some sensors return an intensity image along with the 3D data. Filters on the intensity image can
be used to remove parts of the image that contain background.
– Use background subtraction. If the scene is static, for example, if the sensor is mounted in a fixed
position over a conveyor belt, the XYZ-images of the background can be acquired once without any
boxes in it. Afterwards, sub_image and threshold can be used on the Z-images to select parts
of the 3D data that are not part of the background.
• Increase minimum score: An increased minimum score MinScore might lead to more boxes being
removed earlier in the detection pipeline.
• Increase the smallest possible box: The smaller the smallest possible box side is, the slower
find_box_3d runs. For example, if all boxes are usually seen from a single side, it might make
sense to set SideLen3 to -1. Additionally, box_type can be set to limit the type of boxes that are
searched.
• Manually computing and filtering edges: The edges of the scene can be extracted manually, using
edges_object_model_3d, and passed to find_box_3d using the generic parameter 3d_edges
(see above). Thus, the manual extraction can be used as a further way of filtering the edges.

Parameters

ObjectModel3DScene (input_control): object_model_3d ; handle
Handle of the 3D object model in which to search for boxes.
SideLen1 (input_control): real-array ; real
Minimum and maximum length of the first box side.
SideLen2 (input_control): real-array ; real
Minimum and maximum length of the second box side.
SideLen3 (input_control): real-array ; real
Minimum and maximum length of the third box side.
Default: -1
MinScore (input_control): real ; real / integer
Minimum score of the returned boxes.
Default: 0.6
Restriction: 0 <= MinScore <= 1
GenParam (input_control): dict ; handle
Dictionary for generic parameters.
Default: []
GrippingPose (output_control): pose(-array) ; real / integer
Gripping poses of the detected boxes.
Score (output_control): real-array ; real
Scores of the detected boxes.
ObjectModel3DBox (output_control): object_model_3d-array ; handle
Detected boxes as triangulated 3D object models.
BoxInformation (output_control): dict ; handle
Additional debug information as dictionary.
Result
If all parameters are valid and no error occurs, find_box_3d returns 2 (H_MSG_TRUE). If necessary, an excep-
tion is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d
Possible Successors
gen_box_object_model_3d, get_dict_tuple


Alternatives
find_surface_model
Module
3D Metrology

3.2 3D Gripping Point Detection


This chapter explains how to use 3D Gripping Point Detection.
3D Gripping Point Detection is used to find suitable gripping points on the surface of arbitrary objects in a 3D
scene. The results can be used to target the gripping points with a robot arm and pick up the objects using vacuum
grippers with suction cups.

A possible example for a 3D Gripping Point Detection application: A 3D scene (e.g., an RGB image and
XYZ-images) is analyzed and possible gripping points are suggested.

HALCON provides a pretrained model which is ready for inference without an additional training step. To finetune
the model for a specific task, it is possible to retrain it on a custom application domain. 3D Gripping Point Detection
also works on objects that were not seen in training. Thus, there is no need to provide a 3D model of the objects
that are to be targeted. 3D Gripping Point Detection can also cope with scenes containing various different objects
at once, scenes with partly occluded objects, and with scenes containing cluttered 3D data.
The general inference workflow as well as the retraining are described in the following sections.
General Inference Workflow
This paragraph describes how to determine a suitable gripping point on arbitrary object surfaces using
a 3D Gripping Point Detection model. An application scenario can be seen in the HDevelop example
3d_gripping_point_detection_workflow.hdev.

1. Read the pretrained 3D Gripping Point Detection model by using

• read_dl_model.

2. Set the model parameters regarding, e.g., the used device or image dimensions using

• set_dl_model_param.

3. Generate a data dictionary DLSample for each 3D scene. This can be done using the procedure

• gen_dl_samples_3d_gripping_point_detection,

which can cope with different kinds of 3D data. For further information on the data requirements see the
section “Data” below.
4. Preprocess the data before the inference. For this, you can use the procedure

• preprocess_dl_samples.

The required preprocessing parameters can be generated from the model with


• create_dl_preprocess_param_from_model

or set manually using

• create_dl_preprocess_param.

Note that the preprocessing of the data has significant impact on the inference. See the section “3D scenes”
below for further details.
5. Apply the model using the operator

• apply_dl_model.

6. Perform a post-processing step on the resulting DLResult to retrieve gripping points for your scene using
the procedure

• gen_dl_3d_gripping_points_and_poses.

7. Visualize the 2D and 3D results using the procedure

• dev_display_dl_data or
• dev_display_dl_3d_data, respectively.
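A condensed sketch of these steps is shown below. The model file name is a placeholder, and the procedure calls are abbreviated in comments because their argument lists are not reproduced here; see the HDevelop example 3d_gripping_point_detection_workflow.hdev for the exact calls.

* Read the pretrained model (file name is a placeholder) and select an inference device.
read_dl_model ('pretrained_3d_gripping_point_detection.hdl', DLModelHandle)
query_available_dl_devices ('runtime', 'gpu', DLDeviceHandles)
set_dl_model_param (DLModelHandle, 'device', DLDeviceHandles[0])
* Generate a DLSample from the 3D scene with gen_dl_samples_3d_gripping_point_detection
* and preprocess it with preprocess_dl_samples (DLSample, DLPreprocessParam).
apply_dl_model (DLModelHandle, DLSample, [], DLResult)
* Derive gripping points from DLResult with gen_dl_3d_gripping_points_and_poses.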

Training and Evaluation of the Model


This paragraph describes how the 3D Gripping Point Detection model can be retrained and
evaluated using custom data. An application scenario can be seen in the HDevelop example
3d_gripping_point_detection_training_workflow.hdev.

Preprocess the data This part is about how to preprocess your data.

1. The dataset needs to be converted into the required format. This is done by the procedure
• read_dl_dataset_3d_gripping_point_detection.
It creates a dictionary DLDataset which serves as a database and stores all necessary information
about your data. For more information about the data and the way it is transferred, see the section
“Data” below and the chapter Deep Learning / Model.
2. Split the dataset represented by the dictionary DLDataset. This can be done using the procedure
• split_dl_dataset.
3. The network imposes several requirements on the images. These requirements (for example the image
size and gray value range) can be retrieved with
• get_dl_model_param.
For this you need to read the model first by using
• read_dl_model.
4. Now you can preprocess your dataset. For this, you can use the procedure
• preprocess_dl_dataset.
To use this procedure, specify the preprocessing parameters, e.g., the image size. Store all the parameters with their values in a dictionary DLPreprocessParam, for which you can use the procedure
• create_dl_preprocess_param_from_model.
We recommend saving this dictionary DLPreprocessParam in order to have access to the preprocessing parameter values later during the inference phase.

Training of the model This part explains the finetuning of the 3D Gripping Point Detection model by retraining
it.

1. Set the training parameters and store them in the dictionary TrainParam. This can be done using the
procedure


• create_dl_train_param.
2. Train the model. This can be done using the procedure
• train_dl_model.
The procedure expects:
• the model handle DLModelHandle,
• the dictionary DLDataset containing the data information,
• the dictionary TrainParam containing the training parameters.
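A compact sketch of the training call (the parameter values are examples and the procedure argument lists are simplified; see the HDevelop example 3d_gripping_point_detection_training_workflow.hdev for the exact calls):

* Set training parameters (here: 50 epochs, evaluation every epoch, display enabled, seed 42).
create_dl_train_param (DLModelHandle, 50, 1, 'true', 42, [], [], TrainParam)
* Retrain the model on the preprocessed dataset.
train_dl_model (DLDataset, DLModelHandle, TrainParam, 0)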

Evaluation of the retrained model In this part, we evaluate the 3D Gripping Point Detection model.

1. Set the model parameters which may influence the evaluation.


2. The evaluation can be done conveniently using the procedure
• evaluate_dl_model.
This procedure expects a dictionary GenParam with the evaluation parameters.
3. The dictionary EvaluationResult holds the evaluation measures. To get an idea of how the retrained model performs compared to the pretrained model, you can compare their evaluation values. To understand the different evaluation measures, see the section “Evaluation Measures for 3D Gripping Point Detection Results”.

Data
This section gives information on the data that needs to be provided for the model inference or training and
evaluation of a 3D Gripping Point Detection model.
As a basic concept, the model handles data by dictionaries, meaning it receives the input data from a dictionary
DLSample and returns a dictionary DLResult. More information on the data handling can be found in the
chapter Deep Learning / Model.

3D scenes 3D Gripping Point Detection processes 3D scenes, which consist of regular 2D images and depth
information.
In order to adapt these 3D data to the network input requirements, a preprocessing step is necessary for the
inference. See the section “Specific Preprocessing Parameters” below for information on certain preprocess-
ing parameters. It is recommended to use a high resolution 3D sensor, in order to ensure the necessary data
quality. The following data are needed:

2D image

• RGB image, or
• intensity (gray value) image

Intensity image.
Depth information

• X-image (values need to increase from left to right)


• Y-image (values need to increase from top to bottom)


• Z-image (values need to increase from points close to the sensor to far points; this is for example
the case if the data is given in the camera coordinate system)



(1) X-image, (2) Y-image, (3) Z-image.

Normals (optional)

• 2D mappings (3-channel image)

Normals image.

Providing normal images improves the runtime, as this avoids the need for their computation.

In order to restrict the search area, the domain of the RGB/intensity image can be reduced. For details, see
the section “Specific Preprocessing Parameters” below. Note that the domain of the XYZ-images and the
(optional) normals images need to be identical. Furthermore, for all input data, only valid pixels may be part
of the used domain.
Data for Training and Evaluation The training data is used to train and evaluate a network specifically for your
application.
The dataset needed for this consists of 3D scenes and corresponding information on possible gripping sur-
faces given as segmentation images. They have to be provided in a way the model can process them. Con-
cerning the 3D scene requirements, find more information in the section “3D scenes” above.
How the data has to be formatted in HALCON for a DL model is explained in the chapter Deep Learning /
Model. In short, a dictionary DLDataset serves as a database for the information needed by the training
and evaluation procedures.
The data for DLDataset can be read using read_dl_dataset_3d_gripping_point_detection.
See the reference of read_dl_dataset_3d_gripping_point_detection for information on the
required contents of a 3D Gripping Point Detection DLDataset.
Along with 3D scenes, segmentation images need to be provided, which function as the ground truth. The
segmentation images contain two gray values, denoting for every pixel in the scene whether it is a valid gripping point or not. You can label your data using the MVTec Deep Learning Tool, available from the MVTec
website.


(1) Labeling of an intensity image. (2) Segmentation image, denoting gripping points (gray).

Make sure that the whole labeled area provides robust gripping points for the robot. Consider the following
aspects when labeling your data:

• Gripping points need to be on a surface that can be accessed by the robot arm without being obstructed.
• Gripping points need to be on a surface that the robot arm can grip with its suction cup. Therefore,
consider the object’s material, shape, and surface tilt with regard to the ground plane.
• Take the size of the robot’s suction cup into account.
• Take the strength of the suction cup into account.
• Tend to label gripping points near the object’s center of mass (especially for potentially heavier items).
• Gripping points should not be at an object’s border.
• Gripping points should not be at the border of visible object regions.

Model output As inference output, the model will return a dictionary DLResult for every sample. This dictio-
nary includes the following entries:

• ’gripping_map’: Binary image, indicating for each pixel of the scene whether the model predicted
a gripping point (pixel value = 1.0) or not (0.0).
• ’gripping_confidence’: Image, containing raw, uncalibrated confidence values for every point
in the scene.

Evaluation Measures for 3D Gripping Point Detection Results


For 3D Gripping Point Detection, the following evaluation measures are supported in HALCON:

mean_pro Mean overlap of all ground truth regions labeled as gripping class with the predictions (Per-Region
Overlap). See the paper referenced below for a detailed description of this evaluation measure.
mean_precision Mean pixel-level precision of the predictions for the gripping class. The precision is the
proportion of true positives to all positives (true (TP) and false (FP) ones).

precision = TP / (TP + FP)

mean_iou Intersection over union (IoU) between the ground truth pixels and the predicted pixels of the gripping
class. See Deep Learning / Semantic Segmentation and Edge Extraction for a detailed description of this
evaluation measure.


gripping_point_precision Proportion of true positives to all positives (true and false ones).
For this measure, a true positive is a correctly predicted gripping point, meaning the predicted point is
located within a ground truth region. However, only one gripping point per region is considered a true positive; additional predictions in the same region are considered false positives.

gripping_point_recall The recall is the proportion of the number of correctly predicted gripping points
to the number of all ground truth regions of the gripping class.

recall = TP / (TP + FN)

gripping_point_f_score To represent precision and recall with a single number, we provide the F-score,
the harmonic mean of precision and recall.

F-score = 2 ∗ (precision ∗ recall) / (precision + recall)
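As an illustrative example: if a scene contains 10 ground truth gripping regions and the predictions yield 8 gripping points that each lie in a distinct ground truth region plus 2 points outside any region, then TP = 8, FP = 2, and FN = 2. This gives gripping_point_precision = 8 / (8 + 2) = 0.8, gripping_point_recall = 8 / (8 + 2) = 0.8, and F-score = 2 ∗ (0.8 ∗ 0.8) / (0.8 + 0.8) = 0.8.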

Postprocessing
The model results DLResult can be postprocessed with gen_dl_3d_gripping_points_and_poses in
order to generate gripping points. Furthermore, this procedure can be parameterized in order to reject small grip-
ping regions using min_area_size, or serve as a template to define custom selection criteria.
The procedure adds the following entry to the dictionary DLResult:

• ’gripping_points’: Tuple of dictionaries containing information on suitable gripping points in a


scene:

– ’region’: Connected region of potential gripping points. The determined gripping point lies inside
this region.
– ’row’: Row coordinate of the gripping point in the preprocessed RGB/intensity image.
– ’column’: Column coordinate of the gripping point in the preprocessed RGB/intensity image.
– ’pose’: 3D pose of the gripping point (relative to the coordinate system of the XYZ-images, i.e., of
the camera) which can be used by the robot.
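Based on these keys, the first gripping point of a result could, for example, be accessed like this (a sketch using get_dict_tuple):

get_dict_tuple (DLResult, 'gripping_points', GrippingPoints)
* GrippingPoints is a tuple of dictionaries; take the first suggested gripping point.
get_dict_tuple (GrippingPoints[0], 'pose', GrippingPose)
get_dict_tuple (GrippingPoints[0], 'row', Row)
get_dict_tuple (GrippingPoints[0], 'column', Column)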

Specific Preprocessing Parameters


In the preprocessing step, along with the data, preprocessing parameters need to be passed to
preprocess_dl_samples. Two pairs of those preprocessing parameters have particularly significant impact:

• ’image_width’, ’image_height’: Determine the image dimensions of the images to be inferred.


With larger image dimensions and thus a better resolution, smaller gripping surfaces can be detected. However, the runtime and memory consumption of the application increase.

• ’min_z’, ’max_z’: Determine the allowed distance from the camera for 3D points based on the Z-image. These parameters can help to reduce erroneous outliers and thus increase the robustness of the application.

A restriction of the search area can be done by reducing the domain of the input images (using reduce_domain).
The way preprocess_dl_samples handles the domain is set using the preprocessing parameter
’domain_handling’. The parameter ’domain_handling’ should be used in a way that only essential
information is passed on to the network for inference. The following images show how an input image with
reduced domain is passed on after the preprocessing step depending on the set ’domain_handling’.




(1) Input image with reduced domain (red), (2) image for ’full_domain’, (3) image for ’keep_domain’, (4)
image for ’crop_domain’.
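A restriction of the search area as described above could, for example, be done as follows (region coordinates are placeholders; DLPreprocessParam is the preprocessing parameter dictionary):

* Restrict processing to a rectangular region of interest on the RGB/intensity image.
gen_rectangle1 (ROI, 100, 150, 900, 1200)
reduce_domain (IntensityImage, ROI, IntensityImageReduced)
* Choose how the reduced domain is handled during preprocessing.
set_dict_tuple (DLPreprocessParam, 'domain_handling', 'crop_domain')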

References
Bergmann, P., Batzner, K., Fauser, M., Sattlegger, D. and Steger, C., 2021. The MVTec anomaly detection dataset:
a comprehensive real-world dataset for unsupervised anomaly detection. International Journal of Computer Vision,
129(4), pp.1038-1059.

3.3 Deep 3D Matching

This chapter explains how to use Deep 3D Matching.


Deep 3D Matching is used to accurately detect objects in a scene and compute their 3D pose. This approach
is particularly effective for complex scenarios where traditional 3D matching techniques (like shape-based 3D
matching) may struggle due to variations in object appearance, occlusions, or noisy data. Compared to surface-
based matching, Deep 3D Matching works with a calibrated multi-view setup and does not require data from a 3D
sensor.

A possible example for a Deep 3D Matching application: Images from different angles are used to detect an
object. As a result the 3D pose of the object is computed.

The Deep 3D Matching model consists of two components, which are dedicated to two distinct tasks, the detection,
which localizes objects, and the estimation of object poses. For a Deep 3D Matching application, both components
need to be trained on the 3D CAD model of the object to be found in the application scenes.
Note: For now, only inference is possible in HALCON; the custom training of a model will be available in a future version of HALCON. If you want to use this feature for your applications, please contact your HALCON sales partner for further information.
Once trained, the deep learning model can be used to infer the pose of the object in new application scenes. During
the inference process, images from different angles are used as input.
General Inference Workflow
This paragraph describes how to determine a 3D pose using the Deep 3D Matching method. An application
scenario can be seen in the HDevelop example deep_3d_matching_workflow.hdev.

1. Read the trained Deep 3D Matching model by using

• read_deep_matching_3d.

2. Optimize the deep learning networks contained in the model for inference:

(a) Extract the detection network from the Deep 3D Matching model using


• get_deep_matching_3d_param.
(b) Optimize the network for inference with
• optimize_dl_model_for_inference.
(c) Set the optimized detection network using
• set_deep_matching_3d_param.
(d) Repeat these steps for the 3D pose estimation network.
(e) Save the optimized model using
• write_deep_matching_3d.
Note that optimizing the model has a significant impact on the runtime if it is done with every inference run, so writing the optimized model to file saves time during inference.

3. Set the camera parameters using

• set_deep_matching_3d_param.

4. Apply the model using the operator

• apply_deep_matching_3d.

5. Visualize the resulting 3D poses.
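A minimal sketch of this workflow, omitting the optional optimization step (the file name, camera variables, and parameter values are placeholders; the camera parameters and poses are assumed to come from a prior multi-view calibration):

read_deep_matching_3d ('my_object.dm3', Deep3DMatchingModel)
* Set the calibrated multi-view camera setup (two cameras in this example).
set_deep_matching_3d_param (Deep3DMatchingModel, 'camera_parameter 0', CamParam0)
set_deep_matching_3d_param (Deep3DMatchingModel, 'camera_pose 0', CamPose0)
set_deep_matching_3d_param (Deep3DMatchingModel, 'camera_parameter 1', CamParam1)
set_deep_matching_3d_param (Deep3DMatchingModel, 'camera_pose 1', CamPose1)
set_deep_matching_3d_param (Deep3DMatchingModel, 'min_score', 0.4)
* Images must contain one image per camera, in the same order as the cameras above.
apply_deep_matching_3d (Images, Deep3DMatchingModel, DeepMatchingResults)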

Training and Evaluation of the Model


For now, only inference is possible in HALCON; training of a model will be available in a future version. If you want to use this feature for your applications, please contact your HALCON sales partner for further information.
Data
This section gives information on the camera setup and data that needs to be provided for the model inference or
training and evaluation of a Deep 3D Matching model.
As a basic concept, the model handles data by dictionaries, meaning it receives the input data from a dictionary
DLSample and returns a dictionary DeepMatchingResults. More information on the data handling can be
found in the chapter Deep Learning / Model.

Multi-View Camera Setup In order to use Deep 3D Matching with high accuracy you need a calibrated stereo
or multi-view camera setup. In comparison to stereo reconstruction, Deep 3D Matching can deal with more
strongly varying camera constellations and distances. Also, there is no need to use 3D sensors in the setup. For information on how to calibrate the used setup, please refer to the chapter Calibration / Multi-View.
The objects to be detected must be captured from two or more different perspectives in order to calculate the
3D poses.

Example setups for Deep 3D Matching: Scenes are recorded by several cameras, the objects to be detected
do not have to be seen by every single camera (but by at least two cameras).


Data for Training and Evaluation The training data is used to train and evaluate a Deep 3D Matching model
specifically for your application.
The required training data is generated using CAD models. Synthetic images of the object are created
from various angles, lighting conditions, and backgrounds. Note that no real images are required; the data is generated based on the CAD model.
The data needed for this is a CAD model and corresponding information on material, surface finish, and color.
Information about possible axial and radial symmetries can significantly improve the generated training data.

apply_deep_matching_3d (
Images : : Deep3DMatchingModel : DeepMatchingResults )

Find the pose of objects using Deep 3D Matching.


The operator apply_deep_matching_3d finds instances of the object defined in Deep3DMatchingModel
in the images Images and returns the detected instances and their 3D poses in DeepMatchingResults.
Input Images
Images must be an image array with exactly as many images as there are cameras set in the Deep 3D Match-
ing model (see set_deep_matching_3d_param). The image resolutions must match the resolution of the
corresponding camera parameters. The images must be either of type ’byte’ or ’float’, and they must have 1 or 3
channels.
Deep Learning Models
apply_deep_matching_3d uses deep learning technology for detecting the object instances. For an efficient
execution, it is strongly recommended to use appropriate hardware accelerators and to optimize the deep learning
models. See get_deep_matching_3d_param on how to obtain the deep learning models in order to set the
device on which they are executed and optimize_dl_model_for_inference for optimizing the models
for a particular hardware.
Detection Steps

1. Object Detection The object detection deep learning model is used to find instances of the target object in all
images.
2. 3D pose estimation The pose estimation deep learning model is used to estimate the 3D pose of all instances
found in the previous step. Poses of the same object found in different images are combined into a single
instance.
3. Pose Refinement The poses found in the previous step are further refined using edges visible in the image.
Additionally, their score is computed.
4. Filter Results The detected instances are filtered using the minimum score (’min_score’), the minimum number of cameras in which instances must be visible (’min_num_views’), as well as the maximum number of instances to return (’num_matches’).

Result Format
The results are returned in DeepMatchingResults as a dictionary. The dictionary key ’results’ contains all
detected results. Each result has the following keys:

’score’:
The score of the result instance.
’pose’:
The pose of the result instance in the world coordinate system.
’cameras’:
A tuple of integers containing the indices of the cameras in which the instance was detected.
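A sketch of how the individual results could be read (assuming, as described above, that ’results’ holds one dictionary per detected instance):

get_dict_tuple (DeepMatchingResults, 'results', Results)
for I := 0 to |Results| - 1 by 1
    get_dict_tuple (Results[I], 'pose', Pose)
    get_dict_tuple (Results[I], 'score', Score)
endfor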


Parameters
Images (input_object): (multichannel-)image(-array) ; object : byte / real
Input images.
Deep3DMatchingModel (input_control): deep_matching_3d ; handle
Deep 3D Matching model.
DeepMatchingResults (output_control): dict-array ; handle
Results.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Module
3D Metrology

get_deep_matching_3d_param ( : : Deep3DMatchingModel,
GenParamName : GenParamValue )

Read a parameter from a Deep 3D Matching model.


The operator get_deep_matching_3d_param returns the parameter values of GenParamName for the
Deep 3D Matching model Deep3DMatchingModel in GenParamValue.
The following table gives an overview, which parameters can be set using set_deep_matching_3d_param
and which can be retrieved using get_deep_matching_3d_param.

GenParamName set get


’camera_parameter N’ x x
’camera_pose N’ x x
’delete_cameras’ x
’dl_model_detection’ x x
’dl_model_pose_estimation’ x x
’min_num_views’ x x
’min_score’ x x
’num_matches’ x x
’orig_3d_model’ x

In the following the parameters are described:

’camera_parameter N’, ’camera_pose N’, ’delete_cameras’:


These parameters control the camera setup used for matching, i.e., the camera poses and the camera parame-
ters. ’delete_cameras’ can be set with an empty tuple as value to delete all cameras from a Deep 3D Matching
model.
The keys for accessing the parameters of the N’th camera are ’camera_parameter N’ and ’camera_pose N’,
where N is the zero-based index of the camera. For example, to access the parameters of the first camera, use
’camera_parameter 0’ and ’camera_pose 0’. Note that cameras must be added in order.
Further note that the camera parameters should not contain any distortion. It is rec-
ommended to remove any distortion from the camera parameters and images before-
hand, using, for example, change_radial_distortion_cam_par in combination with
change_radial_distortion_image or gen_radial_distortion_map and map_image.
The camera pose is the pose of the camera in an arbitrary world coordinate system. The poses of detected
objects are returned in that world coordinate system. The angles must be passed in radians.


’dl_model_detection’, ’dl_model_pose_estimation’:
The deep learning models used for Deep 3D Matching. Both models are already pretrained for the target object. They can be obtained and written back in order to optimize them using optimize_dl_model_for_inference or to change the device on which they are executed.
’min_num_views’:
This parameter determines the minimum number of cameras in which an instance must be visible in order to
be returned by apply_deep_matching_3d. The parameter can be either an integer larger than zero, or
the string ’auto’. If ’auto’, instances must be visible in a single camera if only a single camera is used, and
in at least two cameras otherwise.
Suggested values: ’auto’, 2, 3
Default: ’auto’
Value range: ≥ 0 .
’min_score’:
This parameter determines the minimum score of detected instances. In other words,
apply_deep_matching_3d ignores all detected instances with a score smaller than this value.
The score computed by the Deep 3D Matching model lies between 0 and 1, where 0 indicates a bad match
and 1 is a very good match.
Value range: [0, . . . , 1]
Default: 0.2
’num_matches’:
This parameter determines the maximum number of matches to return by apply_deep_matching_3d.
If the operator finds more instances than set in ’num_matches’, only the ’num_matches’ instances with the
highest scores are returned. This parameter can be set to zero, in which case all instances above ’min_score’
are returned.
Value range: ≥ 0 .
Default: 0
’orig_3d_model’:
This parameter returns the original 3D CAD model used for creating the Deep 3D Matching model. It can be used, for example, to visualize detection results.
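For example, to execute the detection network on a specific device, it can be extracted, reconfigured with set_dl_model_param, and written back into the model (a sketch; the device handle DLDevice is assumed to have been queried beforehand):

get_deep_matching_3d_param (Deep3DMatchingModel, 'dl_model_detection', DLModelDetection)
set_dl_model_param (DLModelDetection, 'device', DLDevice)
set_deep_matching_3d_param (Deep3DMatchingModel, 'dl_model_detection', DLModelDetection)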

Attention
Deep 3D Matching requires images without significant distortion. It is recommended
to remove any distortion from the camera parameters and images beforehand, using, for example,
change_radial_distortion_cam_par in combination with change_radial_distortion_image
or gen_radial_distortion_map and map_image.
Parameters
Deep3DMatchingModel (input_control): deep_matching_3d ; handle
Deep 3D Matching model.
GenParamName (input_control): attribute.name(-array) ; string
Name of parameter.
Default: ’min_score’
Suggested values: GenParamName ∈ {’min_score’, ’num_matches’, ’orig_3d_model’, ’min_num_views’, ’dl_model_detection’, ’dl_model_pose_estimation’, ’camera_parameter’, ’camera_pose’}
GenParamValue (output_control): attribute.value(-array) ; string / real / integer / handle
Obtained value of parameter.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
Module
3D Metrology


read_deep_matching_3d ( : : FileName : Deep3DMatchingModel )

Read a Deep 3D Matching model from a file.


The operator read_deep_matching_3d reads a Deep 3D Matching model. Such models have to be in the
HALCON format. As a result, the handle Deep3DMatchingModel is returned.
The model is loaded from the file FileName. The default HALCON file extension for Deep 3D Matching models
is ’.dm3’.
Please note that the values of runtime-specific parameters are not written to file, see write_deep_matching_3d. As a consequence, when reading a model, these parameters are initialized with their default values, see get_deep_matching_3d_param.
Parameters
FileName (input_control): filename.read ; string
File name.
File extension: .dm3
Deep3DMatchingModel (output_control): deep_matching_3d ; handle
Handle of the Deep 3D Matching model.
Result
If the parameters are valid, the operator read_deep_matching_3d returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
set_deep_matching_3d_param, get_deep_matching_3d_param, apply_deep_matching_3d
Module
3D Metrology

set_deep_matching_3d_param ( : : Deep3DMatchingModel,
GenParamName, GenParamValue : )

Set a parameter of a Deep 3D Matching model.


The operator set_deep_matching_3d_param sets the selected parameters GenParamName in the Deep
3D Matching model Deep3DMatchingModel to the values passed in GenParamValue.
The possible parameters are listed and described in get_deep_matching_3d_param.
Parameters

Deep3DMatchingModel (input_control): deep_matching_3d ; handle
Deep 3D Matching model.
GenParamName (input_control): attribute.name(-array) ; string
Name of parameter.
Default: ’min_score’
Suggested values: GenParamName ∈ {’min_score’, ’num_matches’, ’min_num_views’, ’dl_model_detection’, ’dl_model_pose_estimation’, ’camera_parameter’, ’camera_pose’, ’delete_cameras’}
GenParamValue (input_control): attribute.value(-array) ; string / real / integer / handle
Value of parameter.
Default: 0.2


Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
Module
3D Metrology

write_deep_matching_3d ( : : Deep3DMatchingModel, FileName : )

Write a Deep 3D Matching model to a file.


write_deep_matching_3d writes the Deep 3D Matching model Deep3DMatchingModel to the file given
by FileName. Please note that the runtime specific parameters ’device’ and ’batch_size’ of the deep learning
models are not written.
The default HALCON file extension for Deep 3D Matching models is ’.dm3’.
The Deep 3D Matching model can be read with read_deep_matching_3d.
Parameters
Deep3DMatchingModel (input_control): deep_matching_3d ; handle
Handle of the Deep 3D Matching model.
FileName (input_control): filename.write ; string
File name.
File extension: .dm3
Result
If the parameters are valid, the operator write_deep_matching_3d returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
set_deep_matching_3d_param
Possible Successors
clear_handle
Module
3D Metrology

3.4 Deformable Surface-Based

add_deformable_surface_model_reference_point (
: : DeformableSurfaceModel, ReferencePointX, ReferencePointY,
ReferencePointZ : ReferencePointIndex )

Add a reference point to a deformable surface model.


The operator add_deformable_surface_model_reference_point adds one or more reference points
to the deformable surface model passed in DeformableSurfaceModel. The 3D coordinates of the reference points are passed in the parameters ReferencePointX, ReferencePointY, and ReferencePointZ. The indices of the new reference points are returned in ReferencePointIndex.


Reference points are defined in model coordinates, i.e., in the coordinate frame of the model parameter of
create_deformable_surface_model. The operators find_deformable_surface_model and
refine_deformable_surface_model return the position of all added reference points as found in the
scene.
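For example, two reference points given in model coordinates (the coordinate values are arbitrary placeholders) can be added as follows:

* Add two reference points; their positions as found in the scene are later returned
* by find_deformable_surface_model and refine_deformable_surface_model.
add_deformable_surface_model_reference_point (DeformableSurfaceModel, [0.0,0.05], [0.0,0.0], [0.02,0.02], ReferencePointIndex)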
Parameters
DeformableSurfaceModel (input_control): deformable_surface_model ; handle
Handle of the deformable surface model.
ReferencePointX (input_control): real(-array) ; real / integer
X-coordinates of the reference points.
ReferencePointY (input_control): real(-array) ; real / integer
Y-coordinates of the reference points.
ReferencePointZ (input_control): real(-array) ; real / integer
Z-coordinates of the reference points.
ReferencePointIndex (output_control): integer(-array) ; integer
Indices of the new reference points.
Result
add_deformable_surface_model_reference_point returns 2 (H_MSG_TRUE) if all parameters are
correct. If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
This operator modifies the state of the following input parameter:
• DeformableSurfaceModel
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_deformable_surface_model, read_deformable_surface_model
Possible Successors
find_deformable_surface_model, refine_deformable_surface_model,
write_deformable_surface_model
See also
create_deformable_surface_model, find_deformable_surface_model,
refine_deformable_surface_model
Module
3D Metrology

add_deformable_surface_model_sample ( : : DeformableSurfaceModel,
ObjectModel3D : )

Add a sample deformation to a deformable surface model.


The operator add_deformable_surface_model_sample adds the example deformation passed
in ObjectModel3D to the deformable surface model DeformableSurfaceModel. The point
cloud given in ObjectModel3D must have exactly as many points as the sampled deforma-
tion model, and is usually the result of the operator find_deformable_surface_model or
refine_deformable_surface_model. The deformable surface model must have been created before-
hand using, for example, create_deformable_surface_model. The operator re-trains the deformable
surface model including the passed deformation. This allows find_deformable_surface_model to find
deformations that are similar to the one given in ObjectModel3D.


Parameters
DeformableSurfaceModel (input_control): deformable_surface_model ; handle
Handle of the deformable surface model.
ObjectModel3D (input_control): object_model_3d(-array) ; handle
Handle of the deformed 3D object model.
Result
add_deformable_surface_model_sample returns 2 (H_MSG_TRUE) if all parameters are correct. If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• DeformableSurfaceModel
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_deformable_surface_model, find_deformable_surface_model,
refine_deformable_surface_model
Possible Successors
find_deformable_surface_model, refine_deformable_surface_model,
get_deformable_surface_model_param, write_deformable_surface_model,
clear_deformable_surface_model
Alternatives
read_deformable_surface_model
See also
find_deformable_surface_model, refine_deformable_surface_model,
read_deformable_surface_model, create_deformable_surface_model,
write_deformable_surface_model, clear_deformable_surface_model
Module
3D Metrology

clear_deformable_surface_matching_result (
: : DeformableSurfaceMatchingResult : )

Free the memory of a deformable surface matching result.


The operator clear_deformable_surface_matching_result frees the
memory of a deformable surface matching result that was created with
find_deformable_surface_model or refine_deformable_surface_model. After calling
clear_deformable_surface_matching_result, the result can no longer be used. The handle
DeformableSurfaceMatchingResult becomes invalid.
Parameters
DeformableSurfaceMatchingResult (input_control): deformable_surface_matching_result(-array) ; handle
Handle of the deformable surface matching result.
Result
If the handle of the result is valid, the operator clear_deformable_surface_matching_result returns
the value 2 (H_MSG_TRUE). If necessary an exception is raised.
Execution Information


• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• DeformableSurfaceMatchingResult
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
find_deformable_surface_model, refine_deformable_surface_model
See also
find_deformable_surface_model, refine_deformable_surface_model
Module
3D Metrology

clear_deformable_surface_model ( : : DeformableSurfaceModel : )

Free the memory of a deformable surface model.


The operator clear_deformable_surface_model frees the memory of a deformable sur-
face model that was created, for example, by read_deformable_surface_model or
create_deformable_surface_model. After calling clear_deformable_surface_model,
the model can no longer be used. The handle DeformableSurfaceModel becomes invalid.
Parameters
DeformableSurfaceModel (input_control): deformable_surface_model(-array) ; handle
Handle of the deformable surface model.
Result
If the handle of the model is valid, the operator clear_deformable_surface_model returns the value 2
(H_MSG_TRUE). If necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• DeformableSurfaceModel
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
read_deformable_surface_model, create_deformable_surface_model
See also
read_deformable_surface_model, create_deformable_surface_model
Module
3D Metrology

create_deformable_surface_model ( : : ObjectModel3D,
RelSamplingDistance, GenParamName,
GenParamValue : DeformableSurfaceModel )

Create the data structure needed to perform deformable surface-based matching.


The operator create_deformable_surface_model creates a model for deformable surface-based matching for the 3D object stored in the 3D object model ObjectModel3D. The 3D object model can, for example,
have been read previously from a file by using read_object_model_3d or it can have been created by using
xyz_to_object_model_3d. The created surface model is returned in DeformableSurfaceModel.
The creation of the deformable surface model requires that the 3D object model contains points and normals. The
following combinations are possible:

• points and point normals, e.g., from a call to surface_normals_object_model_3d


• points and a triangular or polygon mesh, e.g., from a CAD file
• points and a 2D-Mapping, e.g., from an XYZ image triple converted with xyz_to_object_model_3d

Note that the direction and orientation (inward or outward) of the normals of the model are important for matching.
The deformable surface model is created by sampling the 3D object model with a certain distance. The sampling
distance must be specified in the parameter RelSamplingDistance and is parametrized relative to the di-
ameter of the axis-parallel bounding box of the 3D object model. For example, if RelSamplingDistance
is set to 0.05 and the diameter of ObjectModel3D is 10 cm, the points sampled from the object’s
surface will be approximately 5 mm apart. The sampled points can be obtained with the operator
get_deformable_surface_model_param using the value ’sampled_model’. Note that outlier points in
the object model should be avoided, as they would corrupt the diameter. Reducing RelSamplingDistance
leads to more points, and in turn to a more stable but slower matching. Increasing RelSamplingDistance
leads to less points, and in turn to a less stable but faster matching.



(1) Original 3D model. (2) 3D model sampled with RelSamplingDistance = 0.02. (3) RelSamplingDistance
= 0.03. (4) RelSamplingDistance = 0.05.

By default, deformable surface models created with create_deformable_surface_model can handle a


moderate amount of deformation. The operator add_deformable_surface_model_sample can be used
to add additional training samples, thus expanding the range of possible deformations. The amount of deformation
that can be found can also be controlled with the generic parameters ’scale_min’, ’scale_max’ and ’bending_max’
(see below).
The generic parameter pair GenParamName and GenParamValue is used to set additional parameters
for the model generation. GenParamName contains the tuple of parameter names that shall be set and
GenParamValue contains the corresponding values. The following values are possible for GenParamName:

’model_invert_normals’: Invert the orientation of the surface normals of the model. The normal orientation needs
to be known for the model generation. If both the model and the scene are acquired with the same setup, the
normals will already point in the same direction. If the model was loaded from a CAD file, the normals might
point into the opposite direction. If you experience the effect that the model is found on the ’outside’ of the
scene surface and the model was created from a CAD file, try to set this parameter to ’true’. Also, make sure
that the normals in the CAD file all point either outward or inward, i.e., are oriented consistently.
List of values: ’false’, ’true’
Default: ’false’
’scale_min’ and ’scale_max’: The minimum and maximum allowed scaling of the model. Note that if you set one
of the two parameters, the other one must be set too.
Suggested values: 0.8, 1, 1.2
Default: No scaling
Restriction: 0 < ’scale_min’ < ’scale_max’

’bending_max’: Controls the maximum automatic deformation of the model. The model is deformed automati-
cally by bending it with an angle up to the value of ’bending_max’. This allows for deformations to be found
that are within this bending range. The angle is passed in degrees.
Suggested values: 5, 10, 30
Default: 20
Restriction: 0 <= ’bending_max’ < 90
’stiffness’: Control the stiffness of the model when performing the refinement. Larger values of this parameter
lead to a more stiff model that can be less deformed. Smaller values lead to a less stiff model that allows
more deformation.
Suggested values: 0.2, 0.5, 0.8
Default: 0.5
Restriction: 0 < ’stiffness’ <= 1
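
The following minimal HDevelop sketch shows how such a model might be created from a point cloud with normals. The file name, variable names, and parameter values are illustrative assumptions, not part of this reference:

* Read the reference object (assumed PLY point cloud, coordinates in meters).
read_object_model_3d ('reference_part.ply', 'm', [], [], ObjectModel3D, Status)
* Compute point normals, which are required for deformable surface-based matching.
surface_normals_object_model_3d (ObjectModel3D, 'mls', [], [], ObjectModel3DNormals)
* Sample at 3% of the diameter; allow +/-20% scaling and up to 15 degrees of bending.
create_deformable_surface_model (ObjectModel3DNormals, 0.03, ['scale_min','scale_max','bending_max'], [0.8,1.2,15], DeformableSurfaceModel)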

Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model.
. RelSamplingDistance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Sampling distance relative to the object’s diameter
Default: 0.05
Suggested values: RelSamplingDistance ∈ {0.1, 0.05, 0.03, 0.02, 0.01}
Restriction: 0 < RelSamplingDistance < 1
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Names of the generic parameters.
Default: []
Suggested values: GenParamName ∈ {’model_invert_normals’, ’scale_min’, ’scale_max’, ’bending_max’,
’stiffness’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / real / integer
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’, 1, 0.9, 1.1, 5, 10, 20, 30, 0.05, 0.1, 0.2}
. DeformableSurfaceModel (output_control) . . . . . . . . . . . . . . . . . . . deformable_surface_model ; handle
Handle of the deformable surface model.
Result
create_deformable_surface_model returns 2 (H_MSG_TRUE) if all parameters are correct. If neces-
sary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d, get_object_model_3d_params
Possible Successors
add_deformable_surface_model_sample,
add_deformable_surface_model_reference_point, find_deformable_surface_model,
refine_deformable_surface_model, get_deformable_surface_model_param,
write_deformable_surface_model, clear_deformable_surface_model
Alternatives
read_deformable_surface_model
See also
find_deformable_surface_model, refine_deformable_surface_model,
read_deformable_surface_model, add_deformable_surface_model_sample,
add_deformable_surface_model_reference_point, write_deformable_surface_model,
clear_deformable_surface_model
References
Bertram Drost, Slobodan Ilic: “Graph-Based Deformable 3D Object Matching.” Proceedings of the 37th German
Conference on Pattern Recognition, pp. 222-233, 2015.
Module
3D Metrology

deserialize_deformable_surface_model (
: : SerializedItemHandle : DeformableSurfaceModel )

Deserialize a deformable surface model.


deserialize_deformable_surface_model deserializes a deformable surface model that was serial-
ized by serialize_deformable_surface_model (see fwrite_serialized_item for an introduc-
tion of the basic principle of serialization). The serialized deformable surface model is defined by the handle
SerializedItemHandle. The deserialized values are stored in an automatically created deformable surface
model with the handle DeformableSurfaceModel.
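A minimal HDevelop sketch (the file name is an assumption) that restores a model from a serialized item previously written with fwrite_serialized_item:

* Read the serialized item from a binary file and restore the model handle.
open_file ('deformable_model.bin', 'input_binary', FileHandle)
fread_serialized_item (FileHandle, SerializedItemHandle)
close_file (FileHandle)
deserialize_deformable_surface_model (SerializedItemHandle, DeformableSurfaceModel)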
Parameters
. SerializedItemHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
. DeformableSurfaceModel (output_control) . . . . . . . . . . . . . . . . . . . deformable_surface_model ; handle
Handle of the deformable surface model.
Result
If the parameters are valid, the operator deserialize_deformable_surface_model returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
fread_serialized_item, receive_serialized_item,
serialize_deformable_surface_model
Possible Successors
find_deformable_surface_model, refine_deformable_surface_model,
get_deformable_surface_model_param, clear_deformable_surface_model
Alternatives
create_deformable_surface_model
See also
create_deformable_surface_model, read_deformable_surface_model,
write_deformable_surface_model, serialize_deformable_surface_model
Module
3D Metrology

find_deformable_surface_model ( : : DeformableSurfaceModel,
ObjectModel3D, RelSamplingDistance, MinScore, GenParamName,
GenParamValue : Score, DeformableSurfaceMatchingResult )

Find the best match of a deformable surface model in a 3D scene.

The operator find_deformable_surface_model finds the best match of the deformable surface model
DeformableSurfaceModel in the 3D scene ObjectModel3D. The deformable surface model must have
been created previously with, for example, create_deformable_surface_model.
The matching requires that the 3D object model ObjectModel3D contains points and normals. The scene shall
provide one of the following options:

• points and point normals


• points and a 2D-Mapping, e.g., an XYZ image triple converted with xyz_to_object_model_3d

It is important for an accurate pose that the normals of the scene and the model point in the same direction (see
’scene_invert_normals’). Note that triangles or polygons in the passed scene are ignored. Instead, only the vertices
are used for matching. It is thus in general not recommended to use this operator on meshed scenes, such as
CAD data. Instead, such a scene must be sampled beforehand using sample_object_model_3d to create
points and normals. When using noisy point clouds, e.g., from time-of-flight cameras, the generic parameter
’scene_normal_computation’ should be set to ’mls’ in order to obtain more robust results (see below).
First, points are sampled uniformly from the scene passed in ObjectModel3D. The sampling distance is con-
trolled with the parameter RelSamplingDistance, and is given relative to the diameter of the surface model.
Decreasing RelSamplingDistance leads to more sampled points, and in turn to a more stable but slower
matching. Increasing RelSamplingDistance reduces the number of sampled scene points, which leads to a
less stable but faster matching. For an illustration showing different values for RelSamplingDistance, please
refer to the operator create_deformable_surface_model.
The operator get_deformable_surface_matching_result can be used to retrieve the sampled scene
points for visual inspection. For a robust matching it is recommended that at least 50-100 scene points are sampled
for each object instance.
The method first finds an approximate position of the object. This position is then refined. The generic parameters
controlling the deformation are described further down.
If a match was found, the score of the match is returned in Score and a deformable surface match-
ing result handle is returned in DeformableSurfaceMatchingResult. Details of the matching re-
sult, such as the deformed model and the position of the reference points, can be queried with the operator
get_deformable_surface_matching_result using the result handle.
The score is normalized between 0 and 1 and represents the amount of model surface visible in the scene. A value
of 1 represents a perfect match. The parameter MinScore can be used to filter the result. A match is returned
only if its score exceeds the value of MinScore.
The parameters GenParamName and GenParamValue are used to set generic parameters. Both get a tuple
of equal length, where the tuple passed to GenParamName contains the names of the parameters to set, and the
tuple passed to GenParamValue contains the corresponding values. The possible parameter names and values
are described below.

’scene_normal_computation’: This parameter controls the normal computation of the sampled scene. In the de-
fault mode ’fast’, normals are computed based on a small neighborhood of points. In the mode ’mls’, nor-
mals are computed based on a larger neighborhood and using the more complex but more accurate ’mls’
method. A more detailed description of the ’mls’ method can be found in the description of the operator
surface_normals_object_model_3d. The ’mls’ mode is intended for noisy data, such as images
from time-of-flight cameras.
List of values: ’fast’, ’mls’
Default: ’fast’
’scene_invert_normals’: Invert the orientation of the surface normals of the scene. The orientation of surface
normals of the scene have to match with the orientation of the model. If both the model and the scene are
acquired with the same setup, the normals will already point in the same direction. If you experience the
effect that the model is found on the ’outside’ of the scene surface, try to set this parameter to ’true’. Also,
make sure that the normals in the scene all point either outward or inward, i.e., are oriented consistently.
List of values: ’false’, ’true’
Default: ’false’
’pose_ref_num_steps’: Number of iterations for the refinement. Increasing the number of iterations leads to a more
accurate position at the expense of runtime. However, once convergence is reached, the accuracy can no
longer be increased, even if the number of steps is increased.
Suggested values: 1, 10, 25, 50
Default: 25
Restriction: ’pose_ref_num_steps’ > 0
’pose_ref_dist_threshold_rel’: Set the distance threshold for refinement relative to the diameter of the surface
model. Only scene points that are closer to the object than this distance are used for the optimization. Scene
points further away are ignored.
Suggested values: 0.05, 0.1, 0.25, 0.3
Default: 0.25
Restriction: 0 < ’pose_ref_dist_threshold_rel’
’pose_ref_dist_threshold_abs’: Set the distance threshold for dense pose refinement as absolute
value. See ’pose_ref_dist_threshold_rel’ for a detailed description. Only one of the parameters
’pose_ref_dist_threshold_rel’ and ’pose_ref_dist_threshold_abs’ can be set. If both are set, only the
value of the last parameter is used.
Restriction: 0 < ’pose_ref_dist_threshold_abs’
’pose_ref_scoring_dist_rel’: Set the distance threshold for scoring relative to the diameter of the surface model.
See the following ’pose_ref_scoring_dist_abs’ for a detailed description. Only one of the parameters
’pose_ref_scoring_dist_rel’ and ’pose_ref_scoring_dist_abs’ can be set. If both are set, only the value of
the last parameter is used.
Suggested values: 0.1, 0.05, 0.03, 0.005
Default: 0.03
Restriction: 0 < ’pose_ref_scoring_dist_rel’
’pose_ref_scoring_dist_abs’: Set the distance threshold for scoring. Only scene points that are closer to the object
than this distance are considered to be ’on the model’ when computing the score after the refinement. All
other scene points are considered not to be on the model. The value should correspond to the amount of
noise on the coordinates of the scene points. Only one of the parameters ’pose_ref_scoring_dist_rel’ and
’pose_ref_scoring_dist_abs’ can be set. If both are set, only the value of the last parameter is used.
Restriction: 0 < ’pose_ref_scoring_dist_abs’
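
A minimal HDevelop sketch of a search in a noisy scene from a time-of-flight camera. The acquisition of the XYZ images, the variable names, and the chosen values are assumptions:

* Convert an XYZ image triple into a 3D scene (X, Y, Z acquired beforehand).
xyz_to_object_model_3d (X, Y, Z, SceneObjectModel3D)
* Search the deformable surface model; 'mls' normals are more robust on noisy data.
find_deformable_surface_model (DeformableSurfaceModel, SceneObjectModel3D, 0.05, 0.2, ['scene_normal_computation'], ['mls'], Score, DeformableSurfaceMatchingResult)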

Parameters

. DeformableSurfaceModel (input_control) . . . . . . . . . . . . . . . . . . . . deformable_surface_model ; handle


Handle of the deformable surface model.
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model containing the scene.
. RelSamplingDistance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Scene sampling distance relative to the diameter of the surface model.
Default: 0.05
Suggested values: RelSamplingDistance ∈ {0.1, 0.07, 0.05, 0.04, 0.03}
Restriction: 0 < RelSamplingDistance < 1
. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real / integer
Minimum score of the returned match.
Default: 0
Restriction: MinScore >= 0
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’scene_normal_computation’, ’scene_invert_normals’,
’pose_ref_num_steps’, ’pose_ref_dist_threshold_rel’, ’pose_ref_dist_threshold_abs’,
’pose_ref_scoring_dist_abs’, ’pose_ref_scoring_dist_rel’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; string / real / integer
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {’fast’, ’mls’, 0, 1, 10, 25, 50, 0.05, 0.1, 0.25, 0.3, 0.05, 0.03, 0.005}
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Score of the found instance of the surface model.

. DeformableSurfaceMatchingResult (output_control) . . . . . .
deformable_surface_matching_result(-array) ; handle
Handle of the matching result.
Result
find_deformable_surface_model returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary,
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d, get_object_model_3d_params,
read_deformable_surface_model, create_deformable_surface_model,
get_deformable_surface_model_param,
add_deformable_surface_model_reference_point,
add_deformable_surface_model_sample
Possible Successors
refine_deformable_surface_model, get_deformable_surface_matching_result,
clear_deformable_surface_matching_result, clear_object_model_3d
Alternatives
refine_deformable_surface_model
See also
refine_deformable_surface_model
Module
3D Metrology

get_deformable_surface_matching_result (
: : DeformableSurfaceMatchingResult, ResultName,
ResultIndex : ResultValue )

Get details of a result from deformable surface based matching.


The operator get_deformable_surface_matching_result returns details about the re-
sults of deformable surface based matching or the deformable surface refinement. The re-
sults are stored in DeformableSurfaceMatchingResult, which must have been created by
find_deformable_surface_model or refine_deformable_surface_model.
The parameter ResultName is used to select which result detail shall be returned. For some result de-
tails, ResultIndex selects the index of the result detail. ResultIndex is ignored for certain values of
ResultName.
The following values are possible for ResultName:

’sampled_scene’: A 3D object model handle is returned that contains the sampled scene points that were
used in the matching or refinement. This is helpful for tuning the sampling distance of the
scene (see parameter RelSamplingDistance of operators find_deformable_surface_model and
refine_deformable_surface_model). The parameter ResultIndex is ignored.
’rigid_pose’: If DeformableSurfaceMatchingResult was created by
find_deformable_surface_model, a rigid pose is returned that approximates
the deformable matching result. The parameter ResultIndex is ignored. This pa-
rameter is not available if DeformableSurfaceMatchingResult was created by
refine_deformable_surface_model.

’reference_point_x’:
’reference_point_y’:
’reference_point_z’: Returns the x-, y- or z-coordinates of a transformed reference point. The
reference point must have been added to the deformable surface model using the operator
add_deformable_surface_model_reference_point. The indices of the reference points to be
returned are passed in ResultIndex. If ’all’ is passed in ResultIndex, the position of all reference
points is returned.
’deformed_model’: Returns a deformed variant of the 3D object model that was originally passed to
create_deformable_surface_model. The 3D object model is deformed with the reconstructed
deformation. Triangles, polygons and extended attributes contained in the original 3D object model are
maintained. The parameter ResultIndex is ignored.
’deformed_sampled_model’: Returns a deformed variant of the 3D object model that was sampled by
create_deformable_surface_model. The returned 3D object model has the same number of points
as the original, undeformed sampled model, and the points are in the same order. Details about the sampling
are described in create_deformable_surface_model. The original, undeformed sampled model
can be obtained with get_deformable_surface_model_param. The parameter ResultIndex is
ignored.
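
A short HDevelop sketch (variable names are assumptions) that queries all transformed reference points and the deformed model after a successful match:

* Positions of all reference points after the reconstructed deformation.
get_deformable_surface_matching_result (DeformableSurfaceMatchingResult, 'reference_point_x', 'all', RefX)
get_deformable_surface_matching_result (DeformableSurfaceMatchingResult, 'reference_point_y', 'all', RefY)
get_deformable_surface_matching_result (DeformableSurfaceMatchingResult, 'reference_point_z', 'all', RefZ)
* Deformed variant of the original model; ResultIndex is ignored here.
get_deformable_surface_matching_result (DeformableSurfaceMatchingResult, 'deformed_model', 0, DeformedModel3D)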

Parameters
. DeformableSurfaceMatchingResult (input_control) . . . . . . deformable_surface_matching_result
; handle
Handle of the deformable surface matching result.
. ResultName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Name of the result property.
Default: ’sampled_scene’
List of values: ResultName ∈ {’sampled_scene’, ’rigid_pose’, ’reference_point_x’, ’reference_point_y’,
’reference_point_z’, ’deformed_model’, ’deformed_sampled_model’}
. ResultIndex (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Index of the result property.
Default: 0
Suggested values: ResultIndex ∈ {0, 1, 2, 3, ’all’}
Restriction: ResultIndex >= 0
. ResultValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string / real / handle
Value of the result property.
Result
If the handle of the result is valid, the operator get_deformable_surface_matching_result returns
the value 2 (H_MSG_TRUE). If necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.

Possible Predecessors
find_deformable_surface_model, refine_deformable_surface_model
Possible Successors
clear_deformable_surface_model
See also
find_deformable_surface_model, refine_deformable_surface_model,
read_deformable_surface_model, write_deformable_surface_model,
clear_deformable_surface_model
Module
3D Metrology


get_deformable_surface_model_param ( : : DeformableSurfaceModel,
GenParamName : GenParamValue )

Return the parameters and properties of a deformable surface model.


The operator get_deformable_surface_model_param returns parameters and properties of the sur-
face model DeformableSurfaceModel. The surface model must have been created with, for example,
create_deformable_surface_model.
The following values are possible for GenParamName:

’diameter’: Diameter of the model point cloud. The diameter is the length of the diagonal of the axis-parallel
bounding box.
’sampled_model’: The 3D points sampled from the model for matching. This returns a 3D object model that
contains all points sampled from the model surface for matching.
’training_models’: This returns all 3D object models that were used for the training of the de-
formable surface model. This includes the 3D object model passed to and sampled
by create_deformable_surface_model, and the 3D object models added with
add_deformable_surface_model_sample.
’reference_points_x’:
’reference_points_y’:
’reference_points_z’: Returns the x-, y- or z-coordinates of all reference points added with the operator
add_deformable_surface_model_reference_point.
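
A short HDevelop sketch (variable names are assumptions) that reads back the model diameter and the sampled model points:

* Diameter of the model point cloud (diagonal of the axis-parallel bounding box).
get_deformable_surface_model_param (DeformableSurfaceModel, 'diameter', Diameter)
* 3D object model containing the points sampled from the model surface for matching.
get_deformable_surface_model_param (DeformableSurfaceModel, 'sampled_model', SampledModel3D)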

Parameters

. DeformableSurfaceModel (input_control) . . . . . . . . . . . . . . . . . . . . deformable_surface_model ; handle


Handle of the deformable surface model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Name of the parameter.
Default: ’sampled_model’
List of values: GenParamName ∈ {’diameter’, ’sampled_model’, ’sampled_pose_refinement’,
’training_models’, ’reference_points_x’, ’reference_points_y’, ’reference_points_z’, ’original_model’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; real / string / integer
Value of the parameter.
Result
get_deformable_surface_model_param returns 2 (H_MSG_TRUE) if all parameters are correct. If nec-
essary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
create_deformable_surface_model, read_deformable_surface_model,
add_deformable_surface_model_reference_point
Possible Successors
find_deformable_surface_model, refine_deformable_surface_model,
write_deformable_surface_model
See also
create_deformable_surface_model
Module
3D Metrology


read_deformable_surface_model (
: : FileName : DeformableSurfaceModel )

Read a deformable surface model from a file.


The operator read_deformable_surface_model reads the deformable surface model, which has been writ-
ten with write_deformable_surface_model, from the file FileName. The handle of the deformable
surface model is returned in DeformableSurfaceModel. If no absolute path is given in FileName, the
file is searched in the current directory of the HALCON process. The default HALCON file extension for the
deformable surface model file is ’dsfm’. If no file named FileName exists, the default file extension is appended
to FileName.
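A minimal HDevelop sketch (the file name and the scene handle are assumptions) that reads a previously written model and uses it directly for matching:

* Read a model written earlier with write_deformable_surface_model.
read_deformable_surface_model ('reference_part.dsfm', DeformableSurfaceModel)
find_deformable_surface_model (DeformableSurfaceModel, SceneObjectModel3D, 0.05, 0, [], [], Score, DeformableSurfaceMatchingResult)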
Parameters
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
Name of the file to read.
File extension: .dsfm
. DeformableSurfaceModel (output_control) . . . . . . . . . . . . . . . . . . . deformable_surface_model ; handle
Handle of the read deformable surface model.
Result
read_deformable_surface_model returns 2 (H_MSG_TRUE) if all parameters are correct and the file can
be read. If the file is not a deformable surface model file, the error 9506 is raised. If the file has a version that can
not be read by this version of HALCON, the error 9507 is raised. If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
write_deformable_surface_model
Possible Successors
find_deformable_surface_model, refine_deformable_surface_model,
get_deformable_surface_model_param, clear_deformable_surface_model
Alternatives
create_deformable_surface_model
See also
create_deformable_surface_model, write_deformable_surface_model
Module
3D Metrology

refine_deformable_surface_model ( : : DeformableSurfaceModel,
ObjectModel3D, RelSamplingDistance, InitialDeformationObjectModel3D,
GenParamName, GenParamValue : Score,
DeformableSurfaceMatchingResult )

Refine the position and deformation of a deformable surface model in a 3D scene.


The operator refine_deformable_surface_model refines the initial position and deformation given in
InitialDeformationObjectModel3D of the surface model DeformableSurfaceModel in the 3D
scene ObjectModel3D. The deformable surface model DeformableSurfaceModel must have been created
previously with, for example, create_deformable_surface_model.
refine_deformable_surface_model is useful if the position and deformation of an object in a scene is
approximately known and only needs to be refined. Additional information about the output parameters can be
found in find_deformable_surface_model.

InitialDeformationObjectModel3D must contain as many points as the sampled model obtained by
get_deformable_surface_model_param, and the points must be in the same order.
The score of the refined result is returned in Score and a deformable surface matching result handle is returned in
DeformableSurfaceMatchingResult. Details of the result, such as the deformed model and the position
of the reference points, can be queried with the operator get_deformable_surface_matching_result
using the result handle.
The score is normalized between 0 and 1 and represents the amount of model surface visible in the scene. A value
of 1 represents a perfect match.
The parameters GenParamName and GenParamValue are used to set generic parameters. Details about the
generic parameters are described in the documentation of find_deformable_surface_model.
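A minimal HDevelop sketch. It assumes a previous matching result PrevResult whose deformed sampled model serves as the initial deformation, as well as existing handles for the model and the scene:

* The deformed sampled model of a previous match has the same number and order of
* points as the sampled model, so it can serve as InitialDeformationObjectModel3D.
get_deformable_surface_matching_result (PrevResult, 'deformed_sampled_model', 0, InitialDeformationObjectModel3D)
refine_deformable_surface_model (DeformableSurfaceModel, SceneObjectModel3D, 0.05, InitialDeformationObjectModel3D, [], [], Score, DeformableSurfaceMatchingResult)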
Parameters
. DeformableSurfaceModel (input_control) . . . . . . . . . . . . . . . . . . . . deformable_surface_model ; handle
Handle of the deformable surface model.
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model containing the scene.
. RelSamplingDistance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Relative sampling distance of the scene.
Default: 0.05
Suggested values: RelSamplingDistance ∈ {0.1, 0.07, 0.05, 0.04, 0.03}
Restriction: 0 < RelSamplingDistance < 1
. InitialDeformationObjectModel3D (input_control) . . . . . . . . . . . . . . . . . object_model_3d ; handle
Initial deformation of the 3D object model
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’scene_normal_computation’, ’pose_ref_num_steps’,
’pose_ref_dist_threshold_rel’, ’pose_ref_dist_threshold_abs’, ’pose_ref_scoring_dist_abs’,
’pose_ref_scoring_dist_rel’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / real / integer
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {’fast’, ’mls’, 0, 1, 10, 25, 50, 0.05, 0.1, 0.25, 0.3, 0.05, 0.03, 0.005}
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Score of the refined model.
. DeformableSurfaceMatchingResult (output_control) . . . . . .
deformable_surface_matching_result(-array) ; handle
Handle of the matching result.
Result
refine_deformable_surface_model returns 2 (H_MSG_TRUE) if all parameters are correct. If neces-
sary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d, get_object_model_3d_params,
read_deformable_surface_model, create_deformable_surface_model,
get_deformable_surface_model_param, find_deformable_surface_model
Possible Successors
get_deformable_surface_matching_result,
clear_deformable_surface_matching_result, clear_object_model_3d


Alternatives
find_deformable_surface_model
See also
create_deformable_surface_model, find_deformable_surface_model
Module
3D Metrology

serialize_deformable_surface_model (
: : DeformableSurfaceModel : SerializedItemHandle )

Serialize a deformable surface model.


serialize_deformable_surface_model serializes the data of a deformable surface model (see
fwrite_serialized_item for an introduction of the basic principle of serialization). The same data
that is written in a file by write_deformable_surface_model is converted to a serialized item.
The deformable surface model is defined by the handle DeformableSurfaceModel. The serialized de-
formable surface model is returned by the handle SerializedItemHandle and can be deserialized by
deserialize_deformable_surface_model.
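A minimal HDevelop sketch (the file name is an assumption) that serializes the model and writes the serialized item to a binary file:

* Convert the model into a serialized item and write it to disk.
serialize_deformable_surface_model (DeformableSurfaceModel, SerializedItemHandle)
open_file ('deformable_model.bin', 'output_binary', FileHandle)
fwrite_serialized_item (FileHandle, SerializedItemHandle)
close_file (FileHandle)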
Parameters
. DeformableSurfaceModel (input_control) . . . . . . . . . . . . . . . . . . . . deformable_surface_model ; handle
Handle of the deformable surface model.
. SerializedItemHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
Result
If the parameters are valid, the operator serialize_deformable_surface_model returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
read_deformable_surface_model, create_deformable_surface_model
Possible Successors
clear_deformable_surface_model, fwrite_serialized_item, send_serialized_item,
deserialize_deformable_surface_model
See also
create_deformable_surface_model, read_deformable_surface_model,
write_deformable_surface_model, deserialize_deformable_surface_model
Module
3D Metrology

write_deformable_surface_model ( : : DeformableSurfaceModel,
FileName : )

Write a deformable surface model to a file.


The operator write_deformable_surface_model writes a deformable surface model to the file
FileName. The file can be read again with read_deformable_surface_model. The default HALCON
file extension for the deformable surface model file is ’dsfm’.
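A minimal HDevelop sketch (the file name is an assumption):

* Write the model; the default file extension is '.dsfm'.
write_deformable_surface_model (DeformableSurfaceModel, 'reference_part.dsfm')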


Parameters
. DeformableSurfaceModel (input_control) . . . . . . . . . . . . . . . . . . . . deformable_surface_model ; handle
Handle of the deformable surface model to write.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name to write to.
File extension: .dsfm
Result
write_deformable_surface_model returns 2 (H_MSG_TRUE) if all parameters are correct and the HAL-
CON process has write permission to the file. If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
read_deformable_surface_model, create_deformable_surface_model,
get_deformable_surface_model_param
Possible Successors
clear_deformable_surface_model
See also
create_deformable_surface_model, read_deformable_surface_model
Module
3D Metrology

3.5 Shape-Based

clear_shape_model_3d ( : : ShapeModel3DID : )

Free the memory of a 3D shape model.


The operator clear_shape_model_3d frees the memory of a 3D shape model that was created by
create_shape_model_3d. After calling clear_shape_model_3d, the model can no longer be used.
The handle ShapeModel3DID becomes invalid.
Parameters

. ShapeModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d(-array) ; handle


Handle of the 3D shape model.
Result
If the handle of the model is valid, the operator clear_shape_model_3d returns the value 2 (H_MSG_TRUE).
If necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

This operator modifies the state of the following input parameter:


• ShapeModel3DID


During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_shape_model_3d, read_shape_model_3d, write_shape_model_3d
Module
3D Metrology

create_cam_pose_look_at_point ( : : CamPosX, CamPosY, CamPosZ,
LookAtX, LookAtY, LookAtZ, RefPlaneNormal, CamRoll : CamPose )

Create a 3D camera pose from camera center and viewing direction.


The operator create_cam_pose_look_at_point creates a 3D camera pose with respect to a world coordi-
nate system based on two points and the camera roll angle.
The first of the two points defines the position of the optical center of the camera in the world coordinate system,
i.e., the origin of the camera coordinate system. It is given by its three coordinates CamPosX, CamPosY, and
CamPosZ. The second of the two points defines the viewing direction of the camera. It represents the point in the
world coordinate system at which the camera is to look. It is also specified by its three coordinates LookAtX,
LookAtY, and LookAtZ. Consequently, the second point lies on the z axis of the camera coordinate system.
Finally, the remaining degree of freedom to be specified is a rotation of the camera around its z axis, i.e.,
the roll angle of the camera. To determine this rotation, the normal of a reference plane can be specified in
RefPlaneNormal, which defines the reference orientation of the camera. Finally, the camera roll angle can
be specified in CamRoll, which describes a rotation of the camera around its z axis with respect to its reference
orientation.
The reference plane can be seen as a plane in the world coordinate system that is parallel to the x axis of the
camera (in its reference orientation, i.e., with a roll angle of 0). In an alternative interpretation, the normal vector
of the reference plane projected onto the image plane points upwards, i.e., it is mapped to the negative y axis of the
camera coordinate system. The parameter RefPlaneNormal may take one of the following values:

’x’: The reference plane is the yz plane of the world coordinate system. The projected x axis of the world coordi-
nate system points upwards in the image plane.
’-x’: The reference plane is the yz plane of the world coordinate system. The projected x axis of the world
coordinate system points downwards in the image plane.
’y’: The reference plane is the xz plane of the world coordinate system. The projected y axis of the world coordi-
nate system points upwards in the image plane.
’-y’: The reference plane is the xz plane of the world coordinate system. The projected y axis of the world
coordinate system points downwards in the image plane.
’z’: The reference plane is the xy plane of the world coordinate system. The projected z axis of the world coordi-
nate system points upwards in the image plane.
’-z’: The reference plane is the xy plane of the world coordinate system. The projected z axis of the world
coordinate system points downwards in the image plane.

Alternatively to the above values, an arbitrary normal vector can be specified in RefPlaneNormal, which is not
restricted to the coordinate axes. For this, a tuple of three values representing the three components of the normal
vector must be passed.
Note that the position of the optical center and the point at which the camera looks must differ from each other.
Furthermore, the normal vector of the reference plane and the z axis of the camera must not be parallel. Otherwise,
the camera pose is not well-defined.
create_cam_pose_look_at_point is particularly useful if a 3D object model or a 3D shape
model should be visualized from a certain camera position. In this case, the pose that is cre-
ated by create_cam_pose_look_at_point can be passed to project_object_model_3d or
project_shape_model_3d, respectively.
It is also possible to pass tuples of different length for different input parameters. In this case, internally the
maximum number of parameter values over all input control parameters is computed. This number is taken as
the number of output camera poses. Then, all input parameters can contain a single value or the same number of
values as output camera poses. In the first case, the single value is used for the computation of all camera poses,
while in the second case the respective value of the element in the parameter is used for the computation of the
corresponding camera pose.
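A small HDevelop sketch (the coordinates are illustrative assumptions) that creates a pose for a camera placed above and beside the origin, looking at the origin, with the projected world z axis pointing upwards in the image:

* Optical center at (0.5, 0, 1.0) m, looking at the world origin, no camera roll.
create_cam_pose_look_at_point (0.5, 0, 1.0, 0, 0, 0, 'z', 0, CamPose)
* The resulting pose can then be used, e.g., with project_object_model_3d for visualization.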
Parameters
. CamPosX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
X coordinate of the optical center of the camera.
. CamPosY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Y coordinate of the optical center of the camera.
. CamPosZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Z coordinate of the optical center of the camera.
. LookAtX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
X coordinate of the 3D point to which the camera is directed.
. LookAtY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Y coordinate of the 3D point to which the camera is directed.
. LookAtZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Z coordinate of the 3D point to which the camera is directed.
. RefPlaneNormal (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string / real
Normal vector of the reference plane (points up).
Default: ’-y’
List of values: RefPlaneNormal ∈ {’x’, ’y’, ’z’, ’-x’, ’-y’, ’-z’}
. CamRoll (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; real
Camera roll angle.
Default: 0
. CamPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
3D camera pose.
Result
If the parameters are valid, the operator create_cam_pose_look_at_point returns the value 2
(H_MSG_TRUE). If necessary an exception is raised. If the parameters are chosen such that the pose is not well
defined, the error 8940 is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
convert_point_3d_spher_to_cart
Alternatives
create_pose
Module
3D Metrology

create_shape_model_3d ( : : ObjectModel3D, CamParam, RefRotX,
RefRotY, RefRotZ, OrderOfRotation, LongitudeMin, LongitudeMax,
LatitudeMin, LatitudeMax, CamRollMin, CamRollMax, DistMin,
DistMax, MinContrast, GenParamName,
GenParamValue : ShapeModel3DID )

Prepare a 3D object model for matching.


The operator create_shape_model_3d prepares a 3D object model, which is passed in ObjectModel3D,
as a 3D shape model used for matching. The 3D object model must previously have been read from a file by using
read_object_model_3d.


The 3D shape model is generated by computing different views of the 3D object model within a user-specified
pose range. The views are automatically generated by placing virtual cameras around the 3D object model and
projecting the 3D object model into the image plane of each virtual camera position. For each such obtained view a
2D shape representation is computed. Thus, for the generation of the 3D shape model, no images of the object are
used but only the 3D object model, which is passed in ObjectModel3D. The shape representations of all views
are stored in the 3D shape model, which is returned in ShapeModel3DID. During the matching process with
find_shape_model_3d, the shape representations are used to find out the best-matching view, from which
the pose is subsequently refined and returned.
In order to create the model views correctly, the camera parameters of the camera that will be used for the
matching must be passed in CamParam. The camera parameters are necessary, for example, to determine
the scale of the projections by using the actual focal length of the camera. Furthermore, they are used to
treat radial distortions of the lens correctly. Consequently, it is essential to calibrate the camera by using
calibrate_cameras before creating the 3D shape model. On the one hand, this is necessary to obtain ac-
curate poses from find_shape_model_3d. On the other hand, this makes the 3D matching applicable even
when using lenses with significant radial distortions.
The pose range within which the model views are generated can be specified by the parameters RefRotX,
RefRotY, RefRotZ, OrderOfRotation, LongitudeMin, LongitudeMax, LatitudeMin,
LatitudeMax, CamRollMin, CamRollMax, DistMin, and DistMax. Note that the model will
only be recognized during the matching if it appears within the specified pose range. The parameters are described
in the following:
Before computing the views, the origin of the coordinate system of the 3D object model is moved to the refer-
ence point of the 3D object model, which is the center of the smallest enclosing axis-parallel cuboid and can be
queried by using get_object_model_3d_params. The virtual cameras, which are used to create the views,
are arranged around the 3D object model in such a way that they all look at the origin of the coordinate system,
i.e., the z axes of the cameras pass through the origin. The pose range can then be specified by restricting the
views to a certain quadrilateral on the sphere around the origin. This naturally leads to the use of the spheri-
cal coordinates longitude, latitude, and radius. The definition of the spherical coordinate system is chosen such
that the equatorial plane corresponds to the xz plane of the Cartesian coordinate system with the y axis point-
ing to the south pole (negative latitude) and the negative z axis pointing in the direction of the zero meridian
(see convert_point_3d_spher_to_cart or convert_point_3d_cart_to_spher for further de-
tails about the conversion between Cartesian and spherical coordinates). The advantage of this definition is that a
camera with the pose [0,0,z,0,0,0,0] has its optical center at longitude=0, latitude=0, and radius=z. In this case, the
radius represents the distance of the optical center of the camera to the reference point of the 3D object model.
The longitude range, for which views are to be generated, can be specified by LongitudeMin and
LongitudeMax, both given in radians. Accordingly, the latitude range can be specified by LatitudeMin
and LatitudeMax, also given in radians. LongitudeMin and LongitudeMax are adjusted to maintain a
range of 360° (2π). If an adjustment is possible, LongitudeMin and the range are preserved. The minimum
and maximum distance between the camera center and the model reference point is specified by DistMin and
DistMax. Thereby, the model origin is in the center of the smallest enclosing cuboid and does not necessarily
coincide with the origin of the CAD coordinate system. Note that the unit of the distance must be meters (assuming
that the parameter Scale has been correctly set when reading the CAD file with read_object_model_3d).
Finally, the minimum and the maximum camera roll angle can be specified in CamRollMin and CamRollMax.
This interval specifies the allowable camera rotation around its z axis with respect to the 3D object model. If the
image plane is parallel to the plane on which the objects reside and if it is known that the object may rotate in this
plane only in a restricted range, then it is reasonable to specify this range in CamRollMin and CamRollMax.
In all other cases the interpretation of the camera roll angle is difficult, and hence, it is recommended to set this
interval to [−π, +π]. Note that the larger the specified pose range is chosen, the more memory the model will
consume (except for the range of the camera roll angle) and the slower the matching will be.
The orientation of the coordinate system of the 3D object model is defined by the coordinates within the CAD
file that was read by using read_object_model_3d. Therefore, it is reasonable to previously rotate the 3D
object model into a reference orientation such that the view that corresponds to longitude=0 and latitude=0 is ap-
proximately at the center of the pose range. This can be achieved by passing appropriate values for the reference
orientation in RefRotX, RefRotY, RefRotZ, and OrderOfRotation. The rotation is performed around the
axes of the 3D object model, whose origin was set to the reference point. The longitude and latitude range can then
be interpreted as a variation of the 3D object model pose around the reference orientation. There are two possible
ways to specify the reference orientation. The first possibility is to specify three rotation angles in RefRotX,
RefRotY, and RefRotZ and the order in which the three rotations are to be applied in OrderOfRotation,
which can either be ’gba’ or ’abg’. The second possibility is to specify the three components of the Rodriguez
rotation vector in RefRotX, RefRotY, and RefRotZ. In this case, OrderOfRotation must be set to ’ro-
driguez’ (see create_pose for detailed information about the order of the rotations and the definition of the
Rodriguez vector).
Thus, two transformations are applied to the 3D object model before computing the model views within the pose
range. The first transformation is the translation of the origin of the coordinate systems to the reference point. The
second transformation is the rotation of the 3D object model to the desired reference orientation around the axes
of the reference coordinate system. By combining both transformations one obtains the reference pose of the 3D
shape model. The reference pose of the 3D shape model thus describes the pose of the reference coordinate system
with respect to the coordinate system of the 3D object model defined by the CAD file. Let t = (x, y, z)^T be the
coordinates of the reference point of the 3D object model and R be the rotation matrix containing the reference
orientation. Then, a point p_m given in the 3D object model coordinate system can be transformed to a point p_r
in the reference coordinate system of the 3D shape model by applying the following formula:
p_r = R · (p_m − t)
This transformation can be expressed by a homogeneous 3D transformation matrix or alternatively in terms of a
3D pose. The latter can be queried by passing ’reference_pose’ for the parameter GenParamName of the operator
get_shape_model_3d_params. The above formula can be best imagined as a pose of pose type 8, 10, or 12,
depending on the value that was chosen for OrderOfRotation (see create_pose for detailed information
about the different pose types). Note, however, that get_shape_model_3d_params always returns the pose
using the pose type 0. Finally, poses that are given in one of the two coordinate systems can be transformed to the
other coordinate system by using trans_pose_shape_model_3d.
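A short HDevelop sketch (variable names are assumptions) that queries the reference pose and converts it to a homogeneous transformation matrix, which can then be composed with or inverted against other transformations as needed:

* Reference pose of the 3D shape model, returned in pose type 0.
get_shape_model_3d_params (ShapeModel3DID, 'reference_pose', ReferencePose)
* Represent the transformation as a homogeneous 3D matrix; invert it if the
* opposite transformation direction is required.
pose_to_hom_mat3d (ReferencePose, HomMat3D)
hom_mat3d_invert (HomMat3D, HomMat3DInverted)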
Furthermore, it should be noted that the reference coordinate system is introduced only to specify the pose range
in a convenient way. The pose resulting from the 3D matching that is performed with find_shape_model_3d
always refers to the original 3D object model coordinate system used in the CAD file.
With MinContrast, it can be determined which edge contrast the model must at least have in the recognition
performed by find_shape_model_3d. In other words, this parameter separates the model from the noise in
the image. Therefore, a good choice is the range of gray value changes caused by the noise in the image. If,
for example, the gray values fluctuate within a range of 10 gray levels, MinContrast should be set to 10. If
multichannel images are used for the search images, the noise in one channel must be multiplied by the square root
of the number of channels to determine MinContrast. If, for example, the gray values fluctuate within a range
of 10 gray levels in a single channel and the image is a three-channel image, MinContrast should be set to 17.
If the model should be recognized in very low contrast images, MinContrast must be set to a correspondingly
small value. If the model should be recognized even if it is severely occluded, MinContrast should be slightly
larger than the range of gray value fluctuations created by noise in order to ensure that the pose of the model is
extracted robustly and accurately by find_shape_model_3d.
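A minimal HDevelop sketch that combines the parameters discussed above. The file name, the calibrated camera parameters CamParam, the pose range, and the contrast value are illustrative assumptions:

* Read the CAD model with coordinates interpreted in meters.
read_object_model_3d ('part.stl', 'm', [], [], ObjectModel3D, Status)
* Views within +/-45 degrees longitude and latitude, full camera roll,
* camera distance between 0.3 m and 0.5 m, minimum edge contrast 10.
create_shape_model_3d (ObjectModel3D, CamParam, 0, 0, 0, 'gba', rad(-45), rad(45), rad(-45), rad(45), rad(-180), rad(180), 0.3, 0.5, 10, [], [], ShapeModel3DID)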
The parameters described above are application-dependent and must be always specified when creating a 3D
shape model. In addition, there are some generic parameters that can optionally be used to influence the model
creation. For most applications these parameters need not to be specified but can be left at their default val-
ues. If desired, these parameters and their corresponding values can be specified by using GenParamName and
GenParamValue, respectively. The following values for GenParamName are possible:

’num_levels’: For efficiency reasons the model views are generated on multiple pyramid levels. On higher levels
fewer views are generated than on lower levels. With the parameter ’num_levels’ the number of pyramid
levels on which model views are generated can be specified. It should be chosen as large as possible because
by this the time necessary to find the model is significantly reduced. On the other hand, the number of levels
must be chosen such that the shape representations of the views on the highest pyramid level are still recog-
nizable and contain a sufficient number of points (at least four). If not enough model points are generated for
a certain view, the view is deleted from the model and replaced by a view on a lower pyramid level. If for all
views on a pyramid level not enough model points are generated, the number of levels is reduced internally
until for at least one view enough model points are found on the highest pyramid level. If this procedure
would lead to a model with no pyramid levels, i.e., if the number of model points is too small for all views al-
ready on the lowest pyramid level, create_shape_model_3d returns an error message. If ’num_levels’
is set to ’auto’ (default value), create_shape_model_3d determines the number of pyramid levels au-
tomatically. In this case all model views on all pyramid levels are automatically checked whether their shape
representations are still recognizable. If the shape representation of a certain view is found to be not recog-
nizable, the view is deleted from the model and replaced by a view on a lower pyramid level. Note that if
’num_levels’ is set to ’auto’, the number of pyramid levels can be different for different views. In rare cases,
it might happen that create_shape_model_3d determines a value for the number of pyramid levels that
is too large or too small. If the number of pyramid levels is chosen too large, the model may not be recog-
nized in the image or it may be necessary to select very low parameters for MinScore or Greediness in
find_shape_model_3d in order to find the model. If the number of pyramid levels is chosen too small,
the time required to find the model in find_shape_model_3d may increase. In these cases, the views
on the pyramid levels should be checked by using the output of get_shape_model_3d_contours.
Suggested values: ’auto’, 3, 4, 5, 6
Default: ’auto’
’fast_pose_refinement’: The parameter specifies whether the pose refinement during the search with
find_shape_model_3d is sped up. If ’fast_pose_refinement’ is set to ’false’, for complex models with a
large number of faces the pose refinement step might amount to a significant part of the overall computation
time. If ’fast_pose_refinement’ is set to ’true’, some of the calculations that are necessary during the pose
refinement are already performed during the model generation and stored in the model. Consequently, the
pose refinement during the search will be faster. Please note, however, that in this case the memory con-
sumption of the model may increase significantly (typically by less than 30 percent). Further note that the
resulting poses that are returned by find_shape_model_3d might slightly differ depending on the value
of ’fast_pose_refinement’, because internally the pose refinement is approximated if the parameter is set to
’true’.
List of values: ’true’, ’false’
Default: ’true’
’lowest_model_level’: In some cases the model generation process might be very time consuming and the memory
consumption of the model might be very high. The reason for this is that in these cases the number of views,
which must be computed and stored in the model, is very high. The larger the pose range is chosen and
the larger the objects appear in the image (measured in pixels) the more views are necessary. Consequently,
especially the use of large images (e.g., images exceeding a size of 640 × 480) can result in very large mod-
els. Because the number of views is highest on lower pyramid levels, the parameter ’lowest_model_level’
can be used to exclude the lower pyramid levels from the generation of views. The value that is passed for
’lowest_model_level’ determines the lowest pyramid level down to which views are generated and stored
in the 3D shape model. If, for example, a value of 2 is passed for large models, the time to generate the
model as well as the size of the resulting model is reduced to approximately one third of the original values.
If ’lowest_model_level’ is not passed, views are generated for all pyramid levels, which corresponds to the
behavior when passing a value of 1 for ’lowest_model_level’. If for ’lowest_model_level’ a value larger than
1 is passed, in find_shape_model_3d the tracking of matches through the pyramid will be stopped at
this level. However, if in find_shape_model_3d a least-squares adjustment is chosen for pose refine-
ment, the matches are refined on the lowest pyramid level using the least-squares adjustment. Note that for
different values for ’lowest_model_level’ different matches might be found during the search. Furthermore,
the score of the matches depends on the chosen method for pose refinement. Also note that the higher ’low-
est_model_level’ is chosen the higher the portion of the refinement step with respect to the overall run-time of
find_shape_model_3d will be. As a consequence for higher values of ’lowest_model_level’ the influ-
ence of the generic parameter ’fast_pose_refinement’ (see above) on the runtime will increase. A large value
for ’lowest_model_level’ on the one hand may lead to long computation times of find_shape_model_3d
if ’fast_pose_refinement’ is switched off (’false’). On the other hand, it may lead to a decreased accuracy if
’fast_pose_refinement’ is switched on (’true’) because in this mode the pose refinement is only approxi-
mated. Therefore, the value for ’lowest_model_level’ should be chosen as small as possible. Furthermore,
’lowest_model_level’ should be chosen small enough such that the edges of the 3D object model are still
observable on this level.
Suggested values: 1, 2, 3
Default: 1
’optimization’: For models with particularly large model views, it may be useful to reduce the number of model
points by setting ’optimization’ to a value different from ’none’. If ’optimization’ = ’none’, all model points
are stored. In all other cases, the number of points is reduced according to the value of ’optimization’.
If the number of points is reduced, it may be necessary in find_shape_model_3d to set the parame-
ter Greediness to a smaller value, e.g., 0.7 or 0.8. For models with small model views, the reduction
of the number of model points does not result in a speed-up of the search because in this case usually
significantly more potential instances of the model must be examined. If ’optimization’ is set to ’auto’,
create_shape_model_3d automatically determines the reduction of the number of model points for
each model view.
List of values: ’auto’, ’none’, ’point_reduction_low’, ’point_reduction_medium’, ’point_reduction_high’
Default: ’auto’

’metric’: This parameter determines the conditions under which the model is recognized in the image. If ’metric’
= ’ignore_part_polarity’, the contrast polarity is allowed to change only between different parts of the model,
whereas the polarity of model points that are within the same model part must not change. Please note that
the term ’ignore_part_polarity’ can be misleading: it means that polarity changes between
neighboring model parts do not influence the score, and hence are ignored. Appropriate model parts are
automatically determined. The size of the parts can be controlled by the generic parameter ’part_size’, which
is described below. Note that this metric only works for one-channel images. Consequently, if the model
is created by using this metric and searched in a multi-channel image by using find_shape_model_3d
an error will be returned. If ’metric’ = ’ignore_local_polarity’, the model is found even if the contrast
polarity changes for each individual model point. This metric works for one-channel images as well as
for multi-channel images. The metric ’ignore_part_polarity’ should be used if the images contain strongly
textured backgrounds or clutter objects, which might result in wrong matches. Note that in general the scores
of the matches that are returned by find_shape_model_3d are lower for ’ignore_part_polarity’ than
for ’ignore_local_polarity’. This should be kept in mind when choosing the right value for the parameter
MinScore of find_shape_model_3d.
List of values: ’ignore_local_polarity’, ’ignore_part_polarity’
Default: ’ignore_local_polarity’
’part_size’: This parameter determines the size of the model parts that is used when ’metric’ is set to ’ig-
nore_part_polarity’ (see above). The size must be specified in pixels and should be approximately twice
as large as the size of the background texture in the image. For example, if an object should be found in front
of a chessboard with black and white squares of size 5 × 5 pixels, ’part_size’ should be set to 10. Note that
higher values of ’part_size’ might also decrease the scores of correct instances especially when searching for
objects with shiny or reflective surfaces. Therefore, the risk of missing correct instances might increase if
’part_size’ is set to a higher value. If ’metric’ is set to ’ignore_local_polarity’, the value of ’part_size’ is
ignored.
Suggested values: 2, 3, 4, 6, 8, 10
Default: 4
’min_face_angle’: 3D edges are only included in the shape representations of the views if the angle between
the two 3D faces that are incident with the 3D object model edge is at least ’min_face_angle’. If
’min_face_angle’ is set to 0.0, all edges are included. If ’min_face_angle’ is set to π (equivalent to 180
degrees), only the silhouette of the 3D object model is included. This parameter can be used to suppress
edges within curved surfaces, e.g., the surface of a cylinder or cone. Curved surfaces are approximated by
multiple planar faces. The edges between such neighboring planar faces should not be included in the shape
representation because they also do not appear in real images of the model. Thus, ’min_face_angle’ should
be set sufficiently high to suppress these edges. The effect of different values for ’min_face_angle’ can be
inspected by using project_object_model_3d before calling create_shape_model_3d. Note
that if edges that are not visible in the search image are included in the shape representation, the performance
(robustness and speed) of the matching may decrease considerably.
Suggested values: ’rad(10)’, ’rad(20)’, ’rad(30)’, ’rad(45)’
Default: ’rad(30)’
’min_size’: This value determines a threshold for the selection of significant model components based on the size
of the components, i.e., connected components that have fewer points than the specified minimum size are
suppressed. This threshold for the minimum size is divided by two for each successive pyramid level.
Suggested values: ’auto’, 0, 3, 5, 10, 20
Default: ’auto’
’model_tolerance’: The parameter specifies the tolerance of the projected 3D object model edges in the image,
given in pixels. The higher the value is chosen, the fewer views need to be generated. Consequently, a higher
value results in models that are less memory consuming and faster to find with find_shape_model_3d.
On the other hand, if the value is chosen too high, the robustness of the matching will decrease. Therefore,
this parameter should only be modified with care. For most applications, a good compromise between speed
and robustness is obtained when setting ’model_tolerance’ to 1.
Suggested values: 0, 1, 2
Default: 1
’union_adjacent_contours’: This parameter specifies if adjacent projected contours should be joined by
the operator project_shape_model_3d or not. Activating this option is equivalent to calling
union_adjacent_contours_xld afterwards, but significantly faster.
List of values: ’true’, ’false’
Default: ’false’
If the system variable (see set_system) ’opengl_hidden_surface_removal_enable’ is set to ’true’ (which is
default if it is available) the graphics card is used to accelerate the computation of the visible faces in the
model views. Depending on the graphics card this is significantly faster than the analytic visibility computa-
tion. If ’fast_pose_refinement’ is set to ’true’, the precomputations necessary for the pose refinement step in
find_shape_model_3d are also performed on the graphics card. Be aware that the results of the OpenGL
projection are slightly different compared to the analytic projection.
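For illustration, the following sketch shows how some of the generic parameters described above can be passed via GenParamName and GenParamValue. The handles, camera parameters, pose range, and parameter values are placeholders and must be adapted to the application:

* Illustrative sketch: exclude the lowest pyramid level from the view
* generation and use the part-based metric for textured backgrounds.
create_shape_model_3d (ObjectModel3D, CamParam, 0, 0, 0, 'gba', \
                       -rad(30), rad(30), -rad(30), rad(30), \
                       -rad(180), rad(180), 0.2, 0.3, 10, \
                       ['lowest_model_level','metric','part_size'], \
                       [2,'ignore_part_polarity',10], ShapeModel3DID)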
Parameters

. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle


Handle of the 3D object model.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. RefRotX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real
Reference orientation: Rotation around x-axis or x component of the Rodriguez vector (in radians or without
unit).
Default: 0
Suggested values: RefRotX ∈ {-1.57, -0.78, -0.17, 0., 0.17, 0.78, 1.57}
. RefRotY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real
Reference orientation: Rotation around y-axis or y component of the Rodriguez vector (in radians or without
unit).
Default: 0
Suggested values: RefRotY ∈ {-1.57, -0.78, -0.17, 0., 0.17, 0.78, 1.57}
. RefRotZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real
Reference orientation: Rotation around z-axis or z component of the Rodriguez vector (in radians or without
unit).
Default: 0
Suggested values: RefRotZ ∈ {-1.57, -0.78, -0.17, 0., 0.17, 0.78, 1.57}
. OrderOfRotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Meaning of the rotation values of the reference orientation.
Default: ’gba’
List of values: OrderOfRotation ∈ {’gba’, ’abg’, ’rodriguez’}
. LongitudeMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real
Minimum longitude of the model views.
Default: -0.35
Suggested values: LongitudeMin ∈ {-0.78, -0.35, -0.17}
. LongitudeMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real
Maximum longitude of the model views.
Default: 0.35
Suggested values: LongitudeMax ∈ {0.17, 0.35, 0.78}
Restriction: LongitudeMax >= LongitudeMin
. LatitudeMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real
Minimum latitude of the model views.
Default: -0.35
Suggested values: LatitudeMin ∈ {-0.78, -0.35, -0.17}
Restriction: - pi / 2 <= LatitudeMin && LatitudeMin <= pi / 2
. LatitudeMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real
Maximum latitude of the model views.
Default: 0.35
Suggested values: LatitudeMax ∈ {0.17, 0.35, 0.78}
Restriction: - pi / 2 <= LatitudeMax && LatitudeMax <= pi / 2 && LatitudeMax >= LatitudeMin
. CamRollMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real
Minimum camera roll angle of the model views.
Default: -3.1416
Suggested values: CamRollMin ∈ {-3.14, -1.57, -0.39, 0.0, 0.39, 1.57, 3.14}

. CamRollMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real


Maximum camera roll angle of the model views.
Default: 3.1416
Suggested values: CamRollMax ∈ {-3.14, -1.57, -0.39, 0.0, 0.39, 1.57, 3.14}
Restriction: CamRollMax >= CamRollMin
. DistMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Minimum camera-object-distance of the model views.
Default: 0.3
Suggested values: DistMin ∈ {0.05, 0.1, 0.2, 0.5}
Restriction: DistMin > 0
. DistMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Maximum camera-object-distance of the model views.
Default: 0.4
Suggested values: DistMax ∈ {0.1, 0.2, 0.5, 1.0}
Restriction: DistMax >= DistMin
. MinContrast (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Minimum contrast of the objects in the search images.
Default: 10
Suggested values: MinContrast ∈ {1, 2, 3, 5, 7, 10, 20, 30, 1000, 2000, 5000}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Names of (optional) parameters for controlling the behavior of the operator.
Default: []
List of values: GenParamName ∈ {’num_levels’, ’fast_pose_refinement’, ’lowest_model_level’,
’optimization’, ’metric’, ’part_size’, ’min_face_angle’, ’min_size’, ’model_tolerance’,
’union_adjacent_contours’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; integer / real / string
Values of the optional generic parameters.
Default: []
Suggested values: GenParamValue ∈ {0, 1, 2, 3, 4, 6, 8, 10, ’auto’, ’none’, ’point_reduction_low’,
’point_reduction_medium’, ’point_reduction_high’, 0.1, 0.2, 0.3, ’ignore_local_polarity’,
’ignore_part_polarity’, ’true’, ’false’}
. ShapeModel3DID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; handle
Handle of the 3D shape model.
Result
If the parameters are valid, the operator create_shape_model_3d returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised. If the parameters are chosen such that all model views contain too few points, the
error 8510 is raised. In the case that the projected model is bigger than twice the image size in at least one model
view, the error 8910 is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
read_object_model_3d, project_object_model_3d, get_object_model_3d_params
Possible Successors
find_shape_model_3d, write_shape_model_3d, project_shape_model_3d,
get_shape_model_3d_params, get_shape_model_3d_contours
See also
convert_point_3d_cart_to_spher, convert_point_3d_spher_to_cart,
create_cam_pose_look_at_point, trans_pose_shape_model_3d
References
Markus Ulrich, Christian Wiedemann, Carsten Steger, “Combining Scale-Space and Similarity-Based Aspect
Graphs for Fast 3D Object Recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, pp.
1902-1914, Oct., 2012.
Module
3D Metrology

deserialize_shape_model_3d (
: : SerializedItemHandle : ShapeModel3DID )

Deserialize a serialized 3D shape model.


deserialize_shape_model_3d deserializes a 3D shape model that was serialized by
serialize_shape_model_3d (see fwrite_serialized_item for an introduction of the basic princi-
ple of serialization). The serialized 3D shape model is defined by the handle SerializedItemHandle. The
deserialized values are stored in an automatically created 3D shape model with the handle ShapeModel3DID.
Parameters
. SerializedItemHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
. ShapeModel3DID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; handle
Handle of the 3D shape model.
Result
If the parameters are valid, the operator deserialize_shape_model_3d returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
fread_serialized_item, receive_serialized_item, serialize_shape_model_3d
Possible Successors
find_shape_model_3d, get_shape_model_3d_params
See also
create_shape_model_3d, clear_shape_model_3d
Module
3D Metrology

find_shape_model_3d ( Image : : ShapeModel3DID, MinScore,
    Greediness, NumLevels, GenParamName, GenParamValue : Pose,
    CovPose, Score )

Find the best matches of a 3D shape model in an image.


The operator find_shape_model_3d finds the best matches of the 3D shape model ShapeModel3DID in the
input Image. The 3D shape model must have been created previously by calling create_shape_model_3d
or read_shape_model_3d.
The 3D pose of the found instances of the model is returned in Pose. The pose is in the form ccs Pmcs , where
ccs denotes the camera coordinate system and mcs the model coordinate system (which is a 3D world coordinate
system), see Transformations / Poses and “Solution Guide III-C - 3D Vision”. Hence, it describes
the pose of the 3D object model in camera coordinates. It should be noted that the resulting Pose does not refer
to the reference coordinate system that is introduced in create_shape_model_3d but to the original 3D object
model coordinate system used in the CAD file. If a pose refinement was applied (see below), additionally the
accuracies of the six pose parameters are returned in CovPose. By default, CovPose contains the 6 standard
deviations of the pose parameters for each match. In contrast, if the generic parameter ’cov_pose_mode’ (see
below) was set to ’covariances’, CovPose contains the 36 values of the complete 6 × 6 covariance matrix of the 6
pose parameters. Note that this reflects only an inner accuracy from which the real accuracy of the pose may differ.
Finally, the score of each found instance is returned in Score. The score is a number between 0 and 1, which is
an approximate measure of how much of the model is visible in the image. If, for example, half of the model is
occluded, the score cannot exceed 0.5.
Input parameters in detail

Image and its domain: The domain of the image Image determines the search space for the reference point of
the 3D object model. There is no need to correct any distortions in Image as the calibration data has already
been provided during the model creation.
MinScore: The parameter MinScore determines what score a potential match must at least have to be regarded
as an instance of the model in the image. The larger MinScore is chosen, the faster the search is. If the
model can be expected never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9.
Note that in images with a high degree of clutter or strong background texture, MinScore should be set to
a value not much lower than 0.7 since otherwise false matches could be found.
Greediness: The parameter Greediness determines how “greedily” the search should be carried out. If
Greediness = 0, a safe search heuristic is used, which always finds the model if it is visible in the image.
However, the search will be relatively time consuming in this case. If Greediness = 1, an unsafe search
heuristic is used, which may cause the model not to be found in rare cases, even though it is visible in the
image. For Greediness = 1, the maximum search speed is achieved. In almost all cases, the 3D shape
model will always be found for Greediness = 0.9.
NumLevels: The number of pyramid levels used during the search is determined with NumLevels. If nec-
essary, the number of levels is clipped to the range given when the 3D shape model was created with
create_shape_model_3d. If NumLevels is set to 0, the number of pyramid levels specified in
create_shape_model_3d is used. Optionally, NumLevels can contain a second value that determines
the lowest pyramid level to which the found matches are tracked. Hence, a value of [4,2] for NumLevels
means that the matching starts at the fourth pyramid level and tracks the matches to the second lowest pyra-
mid level (the lowest pyramid level is denoted by a value of 1). This mechanism can be used to decrease
the runtime of the matching. If the lowest pyramid level to use is chosen too large, it may happen that the
desired accuracy cannot be achieved, or that wrong instances of the model are found because the model is
not specific enough on the higher pyramid levels to facilitate a reliable selection of the correct instance of the
model. In this case, the lowest pyramid level to use must be set to a smaller value.
GenParamName and GenParamValue: In addition to the parameters described above, there are some generic
parameters that can optionally be used to influence the matching. For most applications these parameters need
not to be specified but can be left at their default values. If desired, these parameters and their corresponding
values can be specified by using GenParamName and GenParamValue, respectively. The following
values for GenParamName are possible:
• If the pose range in which the model is to be searched is smaller than the pose range that was specified
during the model creation with create_shape_model_3d, the pose range can be restricted appro-
priately with the following parameters. If the values lie outside the pose range of the model, the values
are automatically clipped to the pose range of the model.
’longitude_min’: Sets the minimum longitude of the pose range.
Suggested values: ’rad(-45)’, ’rad(-30)’, ’rad(-15)’
Default: ’rad(-180)’
’longitude_max’: Sets the maximum longitude of the pose range.
Suggested values: ’rad(15)’, ’rad(30)’, ’rad(45)’
Default: ’rad(180)’
’latitude_min’: Sets the minimum latitude of the pose range.
Suggested values: ’rad(-45)’, ’rad(-30)’, ’rad(-15)’
Default: ’rad(-90)’
’latitude_max’: Sets the maximum latitude of the pose range.
Suggested values: ’rad(15)’, ’rad(30)’, ’rad(45)’
Default: ’rad(90)’
’cam_roll_min’: Sets the minimum camera roll angle of the pose range.
Suggested values: ’rad(-45)’, ’rad(-30)’, ’rad(-15)’
Default: ’rad(-180)’

’cam_roll_max’: Sets the maximum camera roll angle of the pose range.
Suggested values: ’rad(15)’, ’rad(30)’, ’rad(45)’
Default: ’rad(180)’
’dist_min’: Sets the minimum camera-object-distance of the pose range.
Suggested values: 0.05, 0.1, 0.5, 1.0
Default: 0
’dist_max’: Sets the maximum camera-object-distance of the pose range.
Suggested values: 0.05, 0.1, 0.5, 1.0
Default: (∞)
• Further generic parameters that do not concern the pose range can be specified:
’num_matches’: With this parameter the maximum number of instances to be found can be determined.
If more than the specified number of instances with a score greater than MinScore are found in the
image, only the best ’num_matches’ instances are returned. If fewer than ’num_matches’ are found,
only that number is returned, i.e., the parameter MinScore takes precedence over ’num_matches’.
If ’num_matches’ is set to 0, all matches that satisfy the score criterion are returned. Note that the
more matches should be found the slower the matching will be.
Suggested values: 0, 1, 2, 3
Default: 1
’max_overlap’: It may happen that multiple instances with similar positions but with different orien-
tations are found in the image. The parameter ’max_overlap’ determines by what fraction (i.e., a
number between 0 and 1) two instances may at most overlap in order to consider them as different
instances, and hence to be returned separately. If two instances overlap each other by more than
the specified value only the best instance is returned. The calculation of the overlap is based on the
smallest enclosing rectangle of arbitrary orientation (see smallest_rectangle2) of the found
instances. If in create_shape_model_3d for ’lowest_model_level’ a value larger than 1 was
passed, the overlap calculation is based on the projection of the smallest enclosing axis-parallel
cuboid of the 3D object model. Because in this case the overlap might be overestimated, in some
cases it could be necessary to increase the value for ’max_overlap’. If ’max_overlap’ = 0, the
found instances may not overlap at all, while for ’max_overlap’ = 1 all instances are returned.
Suggested values: 0.0, 0.2, 0.4, 0.6, 0.8, 1.0
Default: 0.5
’pose_refinement’: This parameter determines whether the poses of the instances should be refined af-
ter the matching. If ’pose_refinement’ is set to ’none’ the model’s pose is only determined with a
limited accuracy. In this case, the accuracy depends on several sampling steps that are used inside
the matching process and therefore cannot be predicted very well. Hence, ’pose_refinement’
should only be set to ’none’ when the computation time is of primary concern and an approxi-
mate pose is sufficient. In all other cases the pose should be determined through a least-squares
adjustment, i.e., by minimizing the distances of the model points to their corresponding image
points. In order to achieve a high accuracy, this refinement is directly performed in 3D. Therefore,
the refinement requires additional computation time. If the system variable (see set_system)
’opengl_hidden_surface_removal_enable’ is set to ’true’ (which is default if it is available) and the
model ShapeModel3DID was created with ’fast_pose_refinement’ set to ’false’, the projection of
the model in the pose refinement step is accelerated using the graphics card. Depending on the graph-
ics card this is significantly faster than the non accelerated algorithm. Be aware that the results of the
OpenGL projection are slightly different compared to the analytic projection. The different modes
for least-squares adjustment (’least_squares’, ’least_squares_high’, and ’least_squares_very_high’)
can be used to determine the accuracy with which the minimum distance is searched for. The higher
the accuracy is chosen, the longer the pose refinement will take, however. For most applications
’least_squares_high’ should be chosen because this results in the best trade-off between runtime
and accuracy. Note that the pose refinement can be sped up by passing ’fast_pose_refinement’ for
the parameter GenParamName of the operator create_shape_model_3d.
List of values: ’none’, ’least_squares’, ’least_squares_high’, ’least_squares_very_high’
Default: ’least_squares_high’
’recompute_score’: This parameter determines whether the score of the matches is recomputed after
the pose refinement. If ’recompute_score’ is set to ’false’, the score is returned that was computed
before the pose refinement. In some cases, however, the pose refinement changes the object pose by
more than one pixel in the image. Consequently, the original score does not appropriately describe
the refined match any longer. This could result in wrong matches obtaining high scores or perfect
matches obtaining low scores. To obtain a more meaningful score that reflects the pose changes due
to the pose refinement, the score can be recomputed after the pose refinement by setting ’recom-
pute_score’ to ’true’. Note that this might change the order of the matches as well as the selection
of matches that is returned. Also note that the recomputation of the score values needs additional
computation time. This increase of the run-time can be reduced by setting the generic parameter
’fast_pose_refinement’ of the operator create_shape_model_3d to ’true’.
List of values: ’false’, ’true’
Default: ’false’
’outlier_suppression’: This parameter only takes effect if ’pose_refinement’ is set to a value other than
’none’, and hence, a least-squares adjustment is performed. Then, in some cases it might be useful
to apply a robust outlier suppression during the least-squares adjustment. This might be necessary,
for example, if a high degree of clutter is present in the image, which prevents the least-squares
adjustment from finding the optimum pose. In this case, ’outlier_suppression’ should be set to
either ’medium’ (eliminates a medium proportion of outliers) or ’high’ (eliminates a high proportion
of outliers). However, in most applications, no robust outlier suppression is necessary, and hence,
’outlier_suppression’ can be set to ’none’. It should be noted that activating the outlier suppression
comes along with a significantly increasing computation time.
List of values: ’none’, ’medium’, ’high’
Default: ’none’
’cov_pose_mode’: This parameter only takes effect if ’pose_refinement’ is set to a value other than
’none’, and hence, a least-squares adjustment is performed. ’cov_pose_mode’ determines the mode
in which the accuracies that are computed during the least-squares adjustment are returned in
CovPose. If ’cov_pose_mode’ is set to ’standard_deviations’, the 6 standard deviations of the
6 pose parameters are returned for each match. In contrast, if ’cov_pose_mode’ is set to ’covari-
ances’, CovPose contains the 36 values of the complete 6 × 6 covariance matrix of the 6 pose
parameters.
List of values: ’standard_deviations’, ’covariances’
Default: ’standard_deviations’
’border_model’: The model is searched within those points of the domain of the image in which the
model lies completely within the image. This means that the model will not be found if it extends
beyond the borders of the image, even if it would achieve a score greater than MinScore. Note
that, if for a certain pyramid level the model touches the image border, it might not be found even
if it lies completely within the original image. As a rule of thumb, the model might not be found if
its distance to an image border falls below 2^(NumLevels-1). This behavior can be changed by setting
’border_model’ to ’true’, which will cause models that extend beyond the image border to be found
if they achieve a score greater than MinScore. Here, points lying outside the image are regarded
as being occluded, i.e., they lower the score. It should be noted that the runtime of the search
will increase in this mode. Note further, that in rare cases, which occur typically only for artificial
images, the model might not be found also if for certain pyramid levels the model touches the border
of the reduced domain. Then, it may help to enlarge the reduced domain by 2^(NumLevels-1) using,
e.g., dilation_circle.
List of values: ’false’, ’true’
Default: ’false’
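For illustration, the following sketch shows how a restricted pose range and some of the other generic parameters can be passed; all values are placeholders and have to be adapted to the application:

* Illustrative sketch: restrict the longitude range, search for up to two
* instances, and track the matches from pyramid level 4 down to level 2.
find_shape_model_3d (Image, ShapeModel3DID, 0.7, 0.9, [4,2], \
                     ['longitude_min','longitude_max','num_matches'], \
                     [-rad(30),rad(30),2], Pose, CovPose, Score)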

Parameters

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; object : byte / uint2


Input image in which the model should be found.
. ShapeModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; handle
Handle of the 3D shape model.
. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Minimum score of the instances of the model to be found.
Default: 0.7
Suggested values: MinScore ∈ {0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Value range: 0 ≤ MinScore ≤ 1
Minimum increment: 0.01
Recommended increment: 0.05

. Greediness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real


“Greediness” of the search heuristic (0: safe but slow; 1: fast but matches may be missed).
Default: 0.9
Suggested values: Greediness ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}
Value range: 0 ≤ Greediness ≤ 1
Minimum increment: 0.01
Recommended increment: 0.05
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2).
Default: 0
List of values: NumLevels ∈ {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of (optional) parameters for controlling the behavior of the operator.
Default: []
List of values: GenParamName ∈ {’longitude_min’, ’longitude_max’, ’latitude_min’, ’latitude_max’,
’cam_roll_min’, ’cam_roll_max’, ’dist_min’, ’dist_max’, ’num_matches’, ’max_overlap’, ’pose_refinement’,
’cov_pose_mode’, ’outlier_suppression’, ’border_model’, ’recompute_score’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; integer / real / string
Values of the optional generic parameters.
Default: []
Suggested values: GenParamValue ∈ {-0.78, -0.35, -0.17, 0.0, 0.17, 0.35, 0.78, 0.1, 0.2, 0.5, ’none’,
’false’, ’true’, ’least_squares’, ’least_squares_high’, ’least_squares_very_high’, ’standard_deviations’,
’covariances’, ’medium’, ’high’}
. Pose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
3D pose of the 3D shape model.
. CovPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
6 standard deviations or 36 covariances of the pose parameters.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Score of the found instances of the 3D shape model.
Example

read_object_model_3d (DXFModelFileName, 'm', [], [], ObjectModel3D, \
                      DxfStatus)
CamParam := ['area_scan_division',0.01221,2791,7.3958e-6,7.4e-6,\
             308.21,245.92,640,480]
create_shape_model_3d (ObjectModel3D, CamParam, 0, 0, 0, 'gba', \
                       -rad(20), rad(20), -rad(20), rad(20), 0, \
                       rad(360), 0.15, 0.2, 10, [], [], ShapeModel3DID)
grab_image_async (Image, AcqHandle, -1)
find_shape_model_3d (Image, ShapeModel3DID, 0.6, 0.9, 0, [], [], \
                     Pose, CovPose, Score)
project_shape_model_3d (ModelContours, ShapeModel3DID, CamParam, \
                        Pose, 'true', rad(30))

Result
If the parameter values are correct, the operator find_shape_model_3d returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised. If the model was created with
create_shape_model_3d by setting ’metric’ to ’ignore_part_polarity’ and a multi-channel input image is
passed in Image, the error 3359 is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.

Possible Predecessors
create_shape_model_3d, read_shape_model_3d
Possible Successors
project_shape_model_3d
See also
convert_point_3d_cart_to_spher, convert_point_3d_spher_to_cart,
create_cam_pose_look_at_point, trans_pose_shape_model_3d
Module
3D Metrology

get_shape_model_3d_contours ( : ModelContours : ShapeModel3DID,
    Level, View : ViewPose )

Return the contour representation of a 3D shape model view.


The operator get_shape_model_3d_contours returns a representation of a single model view of the 3D
shape model ShapeModel3DID as XLD contours in ModelContours. The parameters Level and View
determine for which model view the contour representation should be returned, where Level denotes the pyramid
level and View denotes the model view on this pyramid level.
The permitted range of values for Level and View can previously be determined by using the operator
get_shape_model_3d_params and passing ’num_views_per_level’ for GenParamName.
The contours can be used to visualize and rate the 3D shape model that was created with
create_shape_model_3d. With this it is possible, for example, to decide whether the number of pyra-
mid levels in the model is appropriate or not. If the contours on the highest pyramid level do not show enough de-
tails to be representative for the model view, the number of pyramid levels that are used during the search with
find_shape_model_3d should be adjusted downwards. In contrast, if the contours show too many details
even on the highest pyramid level, a higher number of pyramid levels should be chosen already during the creation
of the 3D shape model by using create_shape_model_3d.
Additionally, the pose of the selected view is returned in ViewPose. It can be used, for example, to project the
3D shape model according to the view pose by using project_shape_model_3d. The rating of the model
contours that was described above can then be performed by comparing the ModelContours to the projected
model. Note that the position of the contours of the projection and the position of the model contours may slightly
differ because of radial distortions.
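A minimal usage sketch (variable names are illustrative): the permitted ranges for Level and View are queried first, then the first view on the highest pyramid level is inspected.

get_shape_model_3d_params (ShapeModel3DID, 'num_views_per_level', \
                           NumViewsPerLevel)
* One tuple element is returned per pyramid level, so the tuple length
* corresponds to the highest pyramid level.
NumLevelsUsed := |NumViewsPerLevel|
get_shape_model_3d_contours (ModelContours, ShapeModel3DID, \
                             NumLevelsUsed, 1, ViewPose)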
Parameters
. ModelContours (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; object
Contour representation of the model view.
. ShapeModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; handle
Handle of the 3D shape model.
. Level (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Pyramid level for which the contour representation should be returned.
Default: 1
Suggested values: Level ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Restriction: Level >= 1
. View (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
View for which the contour representation should be returned.
Default: 1
Suggested values: View ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Restriction: View >= 1
. ViewPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
3D pose of the 3D shape model at the current view.
Result
If the parameters are valid, the operator get_shape_model_3d_contours returns the value 2
(H_MSG_TRUE). If necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
create_shape_model_3d, read_shape_model_3d, get_shape_model_3d_params
Possible Successors
create_shape_model_3d
Module
3D Metrology

get_shape_model_3d_params ( : : ShapeModel3DID,
GenParamName : GenParamValue )

Return the parameters of a 3D shape model.


The operator get_shape_model_3d_params can be used to query parameters of the 3D shape model. The names
of the desired parameters are passed in the generic parameter GenParamName, the corresponding values are
returned in GenParamValue.
The following parameters can be queried:

’cam_param’: Internal parameters of the camera that is used for the matching.
’ref_rot_x’: Reference orientation: Rotation around x-axis or x component of the Rodriguez vector (in radians or
without unit).
’ref_rot_y’: Reference orientation: Rotation around y-axis or y component of the Rodriguez vector (in radians or
without unit).
’ref_rot_z’: Reference orientation: Rotation around z-axis or z component of the Rodriguez vector (in radians or
without unit).
’order_of_rotation’: Meaning of the rotation values of the reference orientation.
’longitude_min’: Minimum longitude of the model views.
’longitude_max’: Maximum longitude of the model views.
’latitude_min’: Minimum latitude of the model views.
’latitude_max’: Maximum latitude of the model views.
’cam_roll_min’: Minimum camera roll angle of the model views.
’cam_roll_max’: Maximum camera roll angle of the model views.
’dist_min’: Minimum camera-object-distance of the model views.
’dist_max’: Maximum camera-object-distance of the model views.
’min_contrast’: Minimum contrast of the objects in the search images.
’num_levels’: User-specified number of pyramid levels.
’num_levels_max’: Maximum number of used pyramid levels over all model views.
’optimization’: Kind of optimization by reducing the number of model points.
’metric’: Match metric.
’part_size’: Size of the model parts that is used when ’metric’ is set to ’ignore_part_polarity’.
’min_face_angle’: Minimum 3D face angle for which 3D object model edges are included in the 3D shape model.
’min_size’: Minimum size of the projected 3D object model edge (in number of pixels) to include the projected
edge in the 3D shape model.
’model_tolerance’: Maximum acceptable tolerance of the projected 3D object model edges (in pixels).
’num_views_per_level’: Number of model views per pyramid level. For each pyramid level the number of views
that are stored in the 3D shape model is returned. Thus, the number of returned elements corresponds to the
number of used pyramid levels, which can be queried with ’num_levels_max’. Note that for pyramid levels
below ’lowest_model_level’ (see documentation of create_shape_model_3d), the value 0 is returned.

’reference_pose’: Reference position and orientation of the 3D shape model. The returned pose is in the form
rcs Pmcs , where rcs denotes the reference coordinate system and mcs the model coordinate system (which
is a 3D world coordinate system), see Transformations / Poses and “Solution Guide III-C - 3D
Vision”. Hence, it describes the pose of the coordinate system that is used in the underlying 3D object
model relative to the internally used reference coordinate system of the 3D shape model. With this pose,
points given in the object coordinate system can be transformed into the reference coordinate system.
’reference_point’: 3D coordinates of the reference point of the underlying 3D object model.
’bounding_box1’: Smallest enclosing axis-parallel cuboid of the underlying 3D object model in the following
order: [min_x, min_y, min_z, max_x, max_y, max_z].
’fast_pose_refinement’: Describes whether the pose refinement during the search is performed in a sped up mode
(’true’) or in the conventional mode (’false’).
’lowest_model_level’: Lowest pyramid level down to which views are stored in the model.
’union_adjacent_contours’: Describes whether in project_shape_model_3d adjacent contours should be
joined or not.

A detailed description of the parameters can be looked up with the operator create_shape_model_3d.
It is possible to query the values of several parameters with a single operator call by passing a tuple containing the
names of all desired parameters to GenParamName. As a result a tuple of the same length with the corresponding
values is returned in GenParamValue. Note that this is solely possible for parameters that return only a single
value.
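For illustration, a sketch of querying several single-valued parameters with one call (the selection of parameter names is arbitrary):

get_shape_model_3d_params (ShapeModel3DID, ['num_levels_max', \
                           'min_contrast','model_tolerance'], GenParamValue)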
Parameters
. ShapeModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; handle
Handle of the 3D shape model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Names of the generic parameters that are to be queried for the 3D shape model.
Default: ’num_levels_max’
List of values: GenParamName ∈ {’cam_param’, ’ref_rot_x’, ’ref_rot_y’, ’ref_rot_z’, ’order_of_rotation’,
’longitude_min’, ’longitude_max’, ’latitude_min’, ’latitude_max’, ’cam_roll_min’, ’cam_roll_max’,
’dist_min’, ’dist_max’, ’min_contrast’, ’num_levels’, ’num_levels_max’, ’optimization’, ’metric’, ’part_size’,
’min_face_angle’, ’min_size’, ’model_tolerance’, ’num_views_per_level’, ’reference_pose’,
’reference_point’, ’bounding_box1’, ’fast_pose_refinement’, ’lowest_model_level’,
’union_adjacent_contours’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; string / integer / real
Values of the generic parameters.
Result
If the parameters are valid, the operator get_shape_model_3d_params returns the value 2 (H_MSG_TRUE).
If necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
create_shape_model_3d, read_shape_model_3d
Possible Successors
find_shape_model_3d
See also
convert_point_3d_cart_to_spher, convert_point_3d_spher_to_cart,
create_cam_pose_look_at_point, trans_pose_shape_model_3d
Module
3D Metrology

project_shape_model_3d ( : ModelContours : ShapeModel3DID,
    CamParam, Pose, HiddenSurfaceRemoval, MinFaceAngle : )

Project the edges of a 3D shape model into image coordinates.


The operator project_shape_model_3d projects the edges of the 3D object model that was used to create
the 3D shape model ShapeModel3DID into the image coordinate system and returns the projected edges in
ModelContours. The coordinates of the 3D object model are given in the 3D world coordinate system (mcs).
First, they are transformed into the camera coordinate system (ccs) using the external camera parameters given
in Pose. Then, these coordinates are projected into the image coordinate system based on the internal camera
parameters CamParam.
The internal camera parameters CamParam describe the projection characteristics of the camera (see Calibra-
tion). The Pose is in the form ccs Pmcs , see Transformations / Poses and “Solution Guide III-C - 3D
Vision”. Hence, it describes the position and orientation of the model coordinate system defined by the 3D
object model relative to the camera coordinate system.
The parameter HiddenSurfaceRemoval can be used to switch on or to switch off the removal of hidden
surfaces. If HiddenSurfaceRemoval is set to ’true’, only those projected edges are returned that are not
hidden by faces of the 3D object model. If HiddenSurfaceRemoval is set to ’false’, all projected edges are
returned. This is faster than a projection with HiddenSurfaceRemoval set to ’true’.
If the system variable (see set_system) ’opengl_hidden_surface_removal_enable’ is set to ’true’ (which is de-
fault if it is available) and HiddenSurfaceRemoval is set to ’true’, the projection of the model is accelerated
using the graphics card. Depending on the graphics card this is significantly faster than the non accelerated algo-
rithm. Be aware that the results of the OpenGL projection are slightly different compared to the analytic projection.
Notably, only the contours visible through CamParam are projected in this mode.
3D edges are only projected if the angle between the two 3D faces that are incident with the 3D edge is at least
MinFaceAngle. If MinFaceAngle is set to 0.0, all edges are projected. If MinFaceAngle is set to π
(equivalent to 180 degrees), only the silhouette of the 3D object model is returned. This parameter can be used to
suppress edges within curved surfaces, e.g., the surface of a cylinder.
If for the model creation with create_shape_model_3d the parameter ’union_adjacent_contours’ was acti-
vated, adjacent contours are joined.
project_shape_model_3d and project_object_model_3d return the same result if the 3D object
model that was used to create the 3D shape model is passed to project_object_model_3d.
project_shape_model_3d is especially useful in order to visualize the matches that are returned by
find_shape_model_3d in the case that the underlying 3D object model is no longer available.
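For illustration, a sketch of visualizing a match returned by find_shape_model_3d; the camera parameters are queried from the model here, and the minimum face angle of rad(30) is only an example value:

get_shape_model_3d_params (ShapeModel3DID, 'cam_param', CamParam)
project_shape_model_3d (ModelContours, ShapeModel3DID, CamParam, \
                        Pose, 'true', rad(30))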
Parameters
. ModelContours (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; object
Contour representation of the model view.
. ShapeModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; handle
Handle of the 3D shape model.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
3D pose of the 3D shape model in the world coordinate system.
. HiddenSurfaceRemoval (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Remove hidden surfaces?
Default: ’true’
List of values: HiddenSurfaceRemoval ∈ {’true’, ’false’}
. MinFaceAngle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real / integer
Smallest face angle for which the edge is displayed
Default: 0.523599
Suggested values: MinFaceAngle ∈ {0.17, 0.26, 0.35, 0.52}
Result
If the parameters are valid, the operator project_shape_model_3d returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
create_shape_model_3d, read_shape_model_3d, get_shape_model_3d_params,
find_shape_model_3d
Alternatives
project_object_model_3d
See also
convert_point_3d_cart_to_spher, convert_point_3d_spher_to_cart,
create_cam_pose_look_at_point, trans_pose_shape_model_3d
Module
3D Metrology

read_shape_model_3d ( : : FileName : ShapeModel3DID )

Read a 3D shape model from a file.


The operator read_shape_model_3d reads a 3D shape model, which has been written with
write_shape_model_3d, from the file FileName. The default HALCON file extension for the 3D shape
model is ’sm3’.
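A minimal usage sketch; the file name 'shape_model.sm3' is only a placeholder:

write_shape_model_3d (ShapeModel3DID, 'shape_model.sm3')
* ... later, possibly in another program ...
read_shape_model_3d ('shape_model.sm3', ShapeModel3DID2)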
Parameters
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name.
File extension: .sm3
. ShapeModel3DID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; handle
Handle of the 3D shape model.
Result
If the file name is valid, the operator read_shape_model_3d returns the value 2 (H_MSG_TRUE). If necessary
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
find_shape_model_3d, get_shape_model_3d_params
See also
create_shape_model_3d, clear_shape_model_3d
Module
3D Metrology

serialize_shape_model_3d (
: : ShapeModel3DID : SerializedItemHandle )

Serialize a 3D shape model.

HALCON/HDevelop Reference Manual, 2024-11-13


3.5. SHAPE-BASED 129

serialize_shape_model_3d serializes the data of a 3D shape model (see fwrite_serialized_item
for an introduction of the basic principle of serialization). The same data that is written in a file by
write_shape_model_3d is converted to a serialized item. The 3D shape model is defined by the handle
ShapeModel3DID. The serialized 3D shape model is returned by the handle SerializedItemHandle and
can be deserialized by deserialize_shape_model_3d.
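A minimal usage sketch (variable names are illustrative): the model is serialized and restored again from the serialized item, e.g., after transferring it to another process or thread:

serialize_shape_model_3d (ShapeModel3DID, SerializedItemHandle)
deserialize_shape_model_3d (SerializedItemHandle, CopiedShapeModel3DID)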
Parameters
. ShapeModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; handle
Handle of the 3D shape model.
. SerializedItemHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
Result
If the parameters are valid, the operator serialize_shape_model_3d returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
create_shape_model_3d
Possible Successors
fwrite_serialized_item, send_serialized_item, deserialize_shape_model_3d
Module
3D Metrology

trans_pose_shape_model_3d ( : : ShapeModel3DID, PoseIn,
    Transformation : PoseOut )

Transform a pose that refers to the coordinate system of a 3D object model to a pose that refers to the reference
coordinate system of a 3D shape model and vice versa.
The operator trans_pose_shape_model_3d transforms the pose PoseIn into the pose PoseOut by using
the transformation direction specified in Transformation. In the majority of cases, the operator will be used
to transform a camera pose that is given relative to the source coordinate system to a camera pose that refers to the
target coordinate system.
The pose can be transformed between two coordinate systems. The first coordinate system is the reference co-
ordinate system of the 3D shape model (ref ) that is passed in ShapeModel3DID. The origin of the reference
coordinate system lies at the reference point of the underlying 3D object model. The orientation of the reference
coordinate system is determined by the reference orientation that was specified when creating the 3D shape model
with create_shape_model_3d.
The second coordinate system is the world coordinate system, i.e., the coordinate system of the 3D object model
(mcs) that underlies the 3D shape model. This coordinate system is implicitly determined by the coordinates that
are stored in the CAD file that was read by using read_object_model_3d.
If Transformation is set to ’ref_to_model’, it is assumed that PoseIn refers to the reference coordinate
system of the 3D shape model. Thus, PoseIn is cs Prcs , where cs denotes the coordinate system the input pose
transforms into (e.g., the camera coordinate system). For further information we refer to Transformations / Poses
and “Solution Guide III-C - 3D Vision”. The resulting output pose PoseOut in this case refers to
the coordinate system of the 3D object model, thus cs Pmcs .
If Transformation is set to ’model_to_ref’, it is assumed that PoseIn refers to the coordinate system of
the 3D object model, cs Pmcs . The resulting output pose PoseOut in this case refers to the reference coordinate
system of the 3D shape model, thus cs Prcs .
The relative pose of the two coordinate systems can be queried by passing ’reference_pose’ for GenParamName
in the operator get_shape_model_3d_params.
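For illustration, a sketch that transforms the pose of a match (which refers to the coordinate system of the underlying 3D object model) into the reference coordinate system of the 3D shape model; the MinScore and Greediness values are placeholders:

find_shape_model_3d (Image, ShapeModel3DID, 0.7, 0.9, 0, [], [], \
                     Pose, CovPose, Score)
trans_pose_shape_model_3d (ShapeModel3DID, Pose, 'model_to_ref', PoseRef)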

Parameters
. ShapeModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; handle
Handle of the 3D shape model.
. PoseIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Pose to be transformed in the source system.
. Transformation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Direction of the transformation.
Default: ’ref_to_model’
List of values: Transformation ∈ {’ref_to_model’, ’model_to_ref’}
. PoseOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Transformed 3D pose in the target system.
Result
If the parameters are valid, the operator trans_pose_shape_model_3d returns the value 2 (H_MSG_TRUE).
If necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
find_shape_model_3d
Alternatives
hom_mat3d_translate, hom_mat3d_rotate
Module
3D Metrology

write_shape_model_3d ( : : ShapeModel3DID, FileName : )

Write a 3D shape model to a file.


The operator write_shape_model_3d writes a 3D shape model to the file FileName. The model can be read
again with read_shape_model_3d. The default HALCON file extension for the 3D shape model is ’sm3’.
Parameters
. ShapeModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; handle
Handle of the 3D shape model.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name.
File extension: .sm3
Result
If the file name is valid (write permission), the operator write_shape_model_3d returns the value 2
(H_MSG_TRUE). If necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
create_shape_model_3d
Module
3D Metrology


3.6 Surface-Based

clear_surface_matching_result ( : : SurfaceMatchingResultID : )

Free the memory of a surface matching result.


The operator clear_surface_matching_result frees the memory of a surface matching re-
sult that was created by find_surface_model or refine_surface_model_pose. After
calling clear_surface_matching_result, the result can no longer be used. The handle
SurfaceMatchingResultID becomes invalid.
Parameters
. SurfaceMatchingResultID (input_control) . . . . . . . . . . . . . . surface_matching_result(-array) ; handle
Handle of the surface matching result.
Result
If the handle of the result is valid, the operator clear_surface_matching_result returns the value 2
(H_MSG_TRUE). If necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:

• SurfaceMatchingResultID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
find_surface_model, refine_surface_model_pose
See also
find_surface_model, refine_surface_model_pose
Module
3D Metrology

clear_surface_model ( : : SurfaceModelID : )

Free the memory of a surface model.


The operator clear_surface_model frees the memory of a surface model that was created by
read_surface_model or create_surface_model. After calling clear_surface_model, the
model can no longer be used. The handle SurfaceModelID becomes invalid.
Parameters
. SurfaceModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . surface_model(-array) ; handle
Handle of the surface model.
Result
If the handle of the model is valid, the operator clear_surface_model returns the value 2 (H_MSG_TRUE).
If necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).


• Processed without parallelization.


This operator modifies the state of the following input parameter:
• SurfaceModelID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
read_surface_model, create_surface_model
See also
read_surface_model, create_surface_model
Module
3D Metrology

create_surface_model ( : : ObjectModel3D, RelSamplingDistance,
GenParamName, GenParamValue : SurfaceModelID )

Create the data structure needed to perform surface-based matching.


The operator create_surface_model creates a model for surface-based matching for the 3D object model
ObjectModel3D. The 3D object model can, for example, have been read previously from a file by using
read_object_model_3d, or been created by using xyz_to_object_model_3d. The created surface
model is returned in SurfaceModelID.
Additional parameters of the surface model can be set with set_surface_model_param after the model was
created.
The creation of the surface model requires that the 3D object model contains points and normals. The following
combinations are possible:
• points and point normals;
• points and a triangular or polygon mesh, e.g., from a CAD file;
• points and a 2D-Mapping, e.g., an XYZ image triple converted with xyz_to_object_model_3d.
Note that the direction and orientation (inward or outward) of the normals of the model are important for matching.
For edge-supported surface-based matching the normals need to point inwards and further the model must contain
a triangular or polygon mesh (see below).
The surface model is created by sampling the 3D object model with a certain distance. The sampling distance must
be specified in the parameter RelSamplingDistance and is parametrized relative to the diameter of the axis-
parallel bounding box of the 3D object model. For example, if RelSamplingDistance is set to 0.05 and the
diameter of ObjectModel3D is 10 cm, the points sampled from the object’s surface will be approximately 5 mm
apart. The sampled points are used for the approximate matching in the operator find_surface_model (see
below). The sampled points can be obtained with the operator get_surface_model_param using the value
’sampled_model’. Note that outlier points in the object model should be avoided, as they would corrupt the diame-
ter. Reducing RelSamplingDistance leads to more points, and in turn to a more stable but slower matching.
Increasing RelSamplingDistance leads to fewer points, and in turn to a less stable but faster matching.

[Figure] (1) Original 3D model. (2) 3D model sampled with RelSamplingDistance = 0.02. (3) RelSamplingDistance
= 0.03. (4) RelSamplingDistance = 0.05.


The sampled points are used for finding the object model in a scene by using the operator
find_surface_model. For this, all possible pairs of points from the point set are examined, and the distance
and relative surface orientation of each pair is computed. Both values are discretized and stored for matching.
The generic parameters ’feat_step_size_rel’ and ’feat_angle_resolution’ can be used to set the discretization of the
distance and the orientation angles, respectively (see below).
The 3D object model is sampled a second time for the pose refinement. The second sampling is done with a
smaller sampling distance, leading to more points. The generic parameter ’pose_ref_rel_sampling_distance’ sets
the sampling distance relative to the object’s diameter. Decreasing the value results in a more accurate pose
refinement but a larger model and a slower model generation and matching. Increasing the value leads to a less
accurate pose refinement but a smaller model and faster model generation and matching (see below).
Surface-based matching can additionally use 3D edges to improve the alignment. This is particularly helpful for
objects that are planar or contain larger planar sides, which would otherwise be found in incorrect rotations or in a
background plane. In order to allow find_surface_model to also align edges, the surface model must be trained by setting
the generic parameter ’train_3d_edges’ to ’true’. In this case, the model must contain a triangular or polygon mesh
where the order of the points results in normals that point inwards. Also, the training for edge-supported surface-
based matching requires OpenGL 2.1, GLSL 1.2, and the OpenGL extensions GL_EXT_framebuffer_object and
GL_EXT_framebuffer_blit. Note that the training can take significantly longer than without edge-support.
Additionally, the model can be prepared to support view-based score computation. This is particularly helpful
for models where only a small part of the 3D object model is visible, which results in low scores if the ratio to
the total number of points is used. Accordingly, the view-based score is computed using the ratio of the matched
points to the maximum number of potentially visible model points from a certain viewpoint. In order to al-
low find_surface_model to compute a view-based score, the surface model must be trained by setting the
generic parameter ’train_view_based’ to ’true’. Similar to ’train_3d_edges’, the model must contain a triangular
or polygon mesh where the order of the points results in normals that point inwards.
Note that using noisy data for the creation of your 3D object model results in the computation of deficient surface
normals. Especially when the model is prepared for the use with 3D edges or the support of view-based score, this
can lead to unreliable scores. In order to reduce noisy 3D data you can, e.g., use smooth_object_model_3d
or simplify_object_model_3d.
The generic parameter pair GenParamName and GenParamValue is used to set additional parameters
for the model generation. GenParamName contains the tuple of parameter names that shall be set and
GenParamValue contains the corresponding values (a minimal usage sketch is given after the following list). The
following values are possible for GenParamName:

’model_invert_normals’: Invert the orientation of the surface normals of the model. The normal orientation needs
to be known for the model generation. If both the model and the scene are acquired with the same setup, the
normals will already point in the same direction. If the model was loaded from a CAD file, the normals might
point into the opposite direction. If you experience the effect that the model is found on the ’outside’ of the
scene surface and the model was created from a CAD file, try to set this parameter to ’true’. Also, make
sure that the normals in the CAD file all point either outward or inward, i.e., are oriented consistently. The
normal direction is irrelevant for the pose refinement of the surface model. Therefore, if the object model is
only used with the operator refine_surface_model_pose, the value of ’model_invert_normals’ has
no effect on the result.
List of values: ’false’, ’true’
Default: ’false’
’pose_ref_rel_sampling_distance’: Set the sampling distance for the pose refinement relative to the object’s di-
ameter. Decreasing this value leads to a more accurate pose refinement but a larger model and slower model
generation and refinement. Increasing the value leads to a less accurate pose refinement but a smaller model
and faster model generation and matching.
Suggested values: 0.05, 0.02, 0.01, 0.005
Default: 0.01
Restriction: 0 < ’pose_ref_rel_sampling_distance’ < 1
’feat_step_size_rel’: Set the discretization distance of the point pair distance relative to the object’s diameter. This
value defaults to the value of RelSamplingDistance. It is not recommended to change this value. For
very noisy scenes, the value can be increased to improve the robustness of the matching against noisy points.
Suggested values: 0.1, 0.05, 0.03
Default: Value of RelSamplingDistance
Restriction: 0 < ’feat_step_size_rel’ < 1


’feat_angle_resolution’: Set the discretization of the point pair orientation as the number of subdivisions of the
angle. It is recommended not to change this value. Increasing the value increases the precision of the
matching but decreases the robustness against incorrect normal directions. Decreasing the value decreases
the precision of the matching but increases the robustness against incorrect normal directions. For very noisy
scenes where the normal directions cannot be computed accurately, the value can be set to 25 or 20.
Suggested values: 20, 25, 30
Default: 30
Restriction: ’feat_angle_resolution’ > 1
’train_3d_edges’: Enable the training for edge-supported surface-based matching and refinement. In this case the
model must contain a mesh, i.e. triangles or polygons. Also, it is important that the computed normal vectors
point inwards. This parameter requires OpenGL.
List of values: ’false’, ’true’
Default: ’false’
’train_view_based’: Enable the training for view-based score computation for surface-based matching and refine-
ment. In this case the model must contain a mesh, i.e. triangles or polygons. Also, it is important that the
computed normal vectors point inwards. This parameter requires OpenGL.
List of values: ’false’, ’true’
Default: ’false’
’train_self_similar_poses’: Prepares the surface model for optimizations regarding self-similar, almost symmetric
poses. For this, poses are found under which the model is very similar to itself, i.e., poses that can be
distinguished only by very small properties of the model (such as boreholes) and that can be confused by
find_surface_model. When calling find_surface_model, it will automatically be determined
which of those self-similar poses are correct.
List of values: ’false’, ’true’
Default: ’false’
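As referenced above, a minimal usage sketch in HDevelop syntax (the file name, sampling distance, and parameter choices are assumptions, not prescribed values):
* Read a CAD model and create a surface model with edge training enabled.
read_object_model_3d ('part.om3', 'm', [], [], ObjectModel3D, Status)
create_surface_model (ObjectModel3D, 0.03, ['train_3d_edges'], ['true'], SurfaceModelID)
* If the model is later found on the 'outside' of the scene surface,
* additionally set 'model_invert_normals' to 'true'.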

Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model.
. RelSamplingDistance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Sampling distance relative to the object’s diameter
Default: 0.03
Suggested values: RelSamplingDistance ∈ {0.1, 0.05, 0.03, 0.02, 0.01}
Restriction: 0 < RelSamplingDistance < 1
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Names of the generic parameters.
Default: []
Suggested values: GenParamName ∈ {’model_invert_normals’, ’pose_ref_rel_sampling_distance’,
’feat_step_size_rel’, ’feat_angle_resolution’, ’train_3d_edges’, ’train_view_based’,
’train_self_similar_poses’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / real / integer
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {0, 1, ’true’, ’false’, 0.005, 0.01, 0.02, 0.05, 0.1}
. SurfaceModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . surface_model ; handle
Handle of the surface model.
Result
create_surface_model returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception
is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.


This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d, get_object_model_3d_params,
surface_normals_object_model_3d
Possible Successors
find_surface_model, refine_surface_model_pose, get_surface_model_param,
write_surface_model, clear_surface_model, set_surface_model_param
Alternatives
read_surface_model
See also
find_surface_model, refine_surface_model_pose, read_surface_model,
write_surface_model, clear_surface_model, set_surface_model_param
References
Bertram Drost, Markus Ulrich, Nassir Navab, Slobodan Ilic: “Model Globally, Match Locally: Efficient and
Robust 3D Object Recognition.” Computer Vision and Pattern Recognition, pp. 998-1005, 2010.
Module
3D Metrology

deserialize_surface_model (
: : SerializedItemHandle : SurfaceModelID )

Deserialize a surface model.


deserialize_surface_model deserializes a surface model that was serialized by
serialize_surface_model (see fwrite_serialized_item for an introduction to the basic
principle of serialization). The serialized surface model is defined by the handle SerializedItemHandle.
The deserialized values are stored in an automatically created surface model with the handle SurfaceModelID.
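A minimal sketch in HDevelop syntax (the file name is an assumption; the file is assumed to contain an item written with fwrite_serialized_item):
* Read a serialized item from a binary file and deserialize it.
open_file ('surface_model.bin', 'input_binary', FileHandle)
fread_serialized_item (FileHandle, SerializedItemHandle)
deserialize_surface_model (SerializedItemHandle, SurfaceModelID)
close_file (FileHandle)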
Parameters
. SerializedItemHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
. SurfaceModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . surface_model ; handle
Handle of the surface model.
Result
If the parameters are valid, the operator deserialize_surface_model returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d, fread_serialized_item,
receive_serialized_item, serialize_surface_model
Possible Successors
find_surface_model, refine_surface_model_pose, get_surface_model_param,
clear_surface_model, find_surface_model_image, refine_surface_model_pose_image
Alternatives
create_surface_model
See also
create_surface_model, read_surface_model, write_surface_model
Module
3D Metrology


find_surface_model ( : : SurfaceModelID, ObjectModel3D,
RelSamplingDistance, KeyPointFraction, MinScore,
ReturnResultHandle, GenParamName, GenParamValue : Pose, Score,
SurfaceMatchingResultID )

Find the best matches of a surface model in a 3D scene.


The operator find_surface_model finds the best matches of the surface model SurfaceModelID in the
3D scene ObjectModel3D and returns their pose in Pose.
The matching is divided into three steps:

1. Approximate matching
2. Sparse pose refinement
3. Dense pose refinement

These steps are described in more detail in the technical note Surface-Based Matching. The generic pa-
rameters used to control these steps are described in the respective sections below. The following paragraphs describe
the parameters and mention additional points to note.
The matching process and the parameters can be visualized and inspected using the HDevelop procedure
debug_find_surface_model.
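A minimal matching call might look as follows (a hedged sketch in HDevelop syntax; the XYZ images, variable names, and parameter values are assumptions):
* Convert an XYZ image triple into a 3D scene and search for the model.
xyz_to_object_model_3d (X, Y, Z, ObjectScene3D)
find_surface_model (SurfaceModelID, ObjectScene3D, 0.05, 0.2, 0.2, 'true', [], [], Pose, Score, SurfaceMatchingResultID)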
Points to Note
Matching the surface model uses points and normals of the 3D scene ObjectModel3D. The scene shall provide
one of the following options:

• points and point normals.


• points and a 2D-Mapping, e.g., an XYZ image triple converted with xyz_to_object_model_3d. In this
case the normals are calculated using the 2D-Mapping.
• points only. The normals are estimated based on the 3D neighborhood. Note that this option is not recommended,
since it generally leads to longer processing times and the computed normals are usually less accurate,
leading to less accurate results.

It is important for an accurate Pose that the normals of the scene and the model point in the same direction (see
’scene_invert_normals’).
If the model was trained for edge-supported surface-based matching and the edge-supported matching has not been
turned off via ’use_3d_edges’, only the second option is possible, i.e., the scene must contain a 2D mapping.
If the model was trained for edge-supported surface-based matching and the scene contains a mapping, normals
contained in the input point cloud are not used (see ’scene_normal_computation’ below).
Further, for models which were trained for edge-supported surface-based matching it is necessary that the normal
vectors point inwards.
Note that triangles or polygons in the passed scene are ignored. Instead, only the vertices are used for matching. It
is thus in general not recommended to use this operator on meshed scenes, such as CAD data. Instead, such a scene
must be sampled beforehand using sample_object_model_3d to create points and normals (e.g., using the
method ’fast_compute_normals’).
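A hedged sketch of this preprocessing in HDevelop syntax (the absolute sampling distance of 0.001 assumes scene coordinates in meters; variable names are assumptions):
* Sample a meshed scene to obtain points and normals before matching.
sample_object_model_3d (MeshedScene3D, 'fast_compute_normals', 0.001, [], [], SampledScene3D)
find_surface_model (SurfaceModelID, SampledScene3D, 0.05, 0.2, 0.2, 'false', [], [], Pose, Score, ResultID)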
When using noisy point clouds, e.g., from time-of-flight cameras, the generic parameter
’scene_normal_computation’ could be set to ’mls’ in order to obtain more robust results (see below).
Parameter Description
SurfaceModelID is the handle of the surface model. The model must have been created previously
with create_surface_model or read in with read_surface_model, respectively. Certain sur-
face model parameters influencing the matching can be set using set_surface_model_param, such as
’pose_restriction_max_angle_diff’ restricting the allowed range of rotations.
ObjectModel3D is the handle of the 3D object model containing the scene in which the matches are searched.
Note that in most cases, it is assumed the scene was observed from a camera looking along the z-axis. This is
important to align the scene normals if they are re-computed (see ’scene_normal_computation’ below). In contrast,
when the model was trained for edge-supported surface-based matching and the scene contains a mapping, normals
are automatically aligned consistently.


The parameter RelSamplingDistance controls the sampling distance during the step Approximate
matching and the Score calculation during the step Sparse pose refinement. Its value is given rela-
tive to the diameter of the surface model. Decreasing RelSamplingDistance leads to more sampled points,
and in turn to a more stable but slower matching. Increasing RelSamplingDistance reduces the number of
sampled scene points, which leads to a less stable but faster matching. For an illustration showing different values
for RelSamplingDistance, please refer to the operator create_surface_model. The sampled scene
points can be retrieved for a visual inspection using the operator get_surface_matching_result. For a
robust matching it is recommended that at least 50-100 scene points are sampled for each object instance.
The parameter KeyPointFraction controls how many points out of the sampled scene points are selected
as key points. For example, if the value is set to 0.1, 10% of the sampled scene points are used as key points.
For stable results it is important that each instance of the object is covered by several key points. Increasing
KeyPointFraction means that more key points are selected from the scene, resulting in a slower but more
stable matching. Decreasing KeyPointFraction has the inverse effect and results in a faster but less stable
matching. The operator get_surface_matching_result can be used to retrieve the selected key points for
visual inspection.
The parameter MinScore can be used to filter the results. Only matches with a score exceeding the value of
MinScore are returned. If MinScore is set to zero, all matches are returned.
For edge-supported surface-based matching (see create_surface_model), four different sub-scores are de-
termined (see their explanation below). For surface-based matching models where view-based score computation
is trained (see create_surface_model), an additional fifth sub-score is determined. As a consequence, you
can filter the results based on each of them by passing a tuple with up to five threshold values to MinScore (a
usage sketch follows the list below). These threshold values are sorted in the order of the scores (see below) and
missing entries are regarded as 0, meaning no filtering based on this sub-score. To find suitable values for the
thresholds, the corresponding sub-scores of found object instances can be obtained using
get_surface_matching_result. Depending on the settings, not all sub-scores might be available. The
thresholds for unavailable sub-scores are ignored. The five sub-scores, whose threshold values have to be passed
in exactly this order in MinScore, are:

1. The overall score as returned in Score and through ’score’ by get_surface_matching_result,


2. the surface fraction of the score, i.e., how much of the object’s surface was detected in the scene, returned
through ’score_surface’ by get_surface_matching_result,
3. the 3D edge fraction of the score, i.e., how well the 3D edges of the object silhouette are aligned with the 3D
edges detected in the scene returned through ’score_3d_edges’ by get_surface_matching_result,
4. the 2D edge fraction of the score, i.e., how well the object silhouette projected into the images aligns with
edges detected in the images (available only for the operators find_surface_model_image
and refine_surface_model_pose_image), returned through ’score_2d_edges’ by
get_surface_matching_result, and
5. the view-based score, i.e., how many model points were detected in the scene, in relation to how many of the
object points are potentially visible from the determined viewpoint, returned through ’score_view_based’ by
get_surface_matching_result.
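As referenced above, a hedged sketch of such filtering in HDevelop syntax (the threshold values are assumptions):
* Require an overall score of at least 0.3 and a surface fraction of at
* least 0.5; the remaining sub-scores are not filtered.
find_surface_model (SurfaceModelID, ObjectScene3D, 0.05, 0.2, [0.3, 0.5], 'true', [], [], Pose, Score, SurfaceMatchingResultID)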

The parameter ReturnResultHandle determines if a surface matching result handle is returned or not. If the
parameter is set to ’true’, the handle is returned in the parameter SurfaceMatchingResultID. Additional
details of the matching process can be queried with the operator get_surface_matching_result using
that handle.
The parameters GenParamName and GenParamValue are used to set generic parameters. Both get a tuple
of equal length, where the tuple passed to GenParamName contains the names of the parameters to set, and the
tuple passed to GenParamValue contains the corresponding values. The possible parameter names and values
are described in the paragraph The three steps of the matching.
The output parameter Pose gives the 3D poses of the found object instances. For every found instance of the
surface model, its pose is given in the scene coordinate system, thus the pose is of the form scs_P_mcs, where scs
denotes the coordinate system of the scene (which is often identical to the coordinate system of the sensor, the
camera coordinate system) and mcs denotes the model coordinate system (which is a 3D world coordinate system), see
Transformations / Poses and “Solution Guide III-C - 3D Vision”. Thereby, the pose refers to the
original coordinate system of the 3D object model that was passed to create_surface_model.
The output parameter Score returns a score for each match. Its value and interpretation differ for the cases
distinguished below.


• With pose refinement


For a matching with pose refinement, the score depends on whether edge-support was activated:
– Without edge-support, the score is the surface fraction, i.e., the approximate fraction of the object’s surface
that is visible in the scene. It is determined by counting the number of model points that have a correspond-
ing scene point and dividing this number either by:
* the total number of points on the model, if the surface-based model is not prepared for view-based
score computation
or by:
* the maximum number of potentially visible model points based on the current viewpoint, if the
surface-based model is prepared for view-based score computation.
0 ≤ Score ≤ 1
– With edge-support, the score is the geometric mean of the surface fraction and the edge fraction. The
surface fraction is affected by whether the surface-based model is prepared for view-based score com-
putation or not, as explained above. The edge fraction is the number of points from the sampled model
edges that are aligned with edges of the scene, divided by the maximum number of potentially visible
points of edges on the model. Note that if the edges are extracted from multiple viewpoints, this might
lead to a score greater than 1.
0 ≤ Score ≤ 1 (if the scene was acquired from one single viewpoint)
0 ≤ Score ≤ N (if the scene was merged from scenes that were acquired from N different viewpoints)
Note that for the computation of the score after the sparse pose refinement, the sampled scene points are used.
For the computation of the score after the dense pose refinement, all scene points are used. Therefore, after
the dense pose refinement, the score value does not depend on the sampling distance of the scene.
• Without pose refinement
If only the first step, Approximate Matching, out of the three steps described in The three steps
of the matching takes place, the possible score value and its interpretation differ only depending on whether
edge-support is used:
– Without edge-support:
The score is the approximate number of points from the subsampled scene that lie on the found object.
Score ≥ 0
– With edge-support:
The score is the approximate number of points from the subsampled scene that lie on the found object
multiplied by the number of points from the sampled scene edges that are aligned with edges of the
model.
Score ≥ 0

The output parameter SurfaceMatchingResultID returns a handle for the surface matching re-
sult. Using this handle, additional details of the matching process can be queried with the operator
get_surface_matching_result. Note that in order to return the handle, ReturnResultHandle has
to be set to ’true’.
The Three Steps of the Matching
The matching is divided into three steps (a combined usage sketch follows the description of these steps):

1. Approximate matching The approximate poses of the instances of the surface model in the scene are searched.
The following generic parameters control the approximate matching and can be set with GenParamName
and GenParamValue:
’num_matches’: Sets the maximum number of matches that are returned.
Suggested values: 1, 2, 5
Default: 1
Restriction: ’num_matches’ > 0
’max_overlap_dist_rel’: For efficiency reasons, the maximum overlap cannot be defined in 3D. Instead,
only the minimum distance between the centers of the axis-aligned bounding boxes of two matches can
be specified with ’max_overlap_dist_rel’. The value is set relative to the diameter of the object. Once
an object with a high Score is found, all other matches are suppressed if the centers of their bounding
boxes lie too close to the center of the first object. If the resulting matches must not overlap, the value
for ’max_overlap_dist_rel’ should be set to 1.0.


Note that only one of the parameters ’max_overlap_dist_rel’ and ’max_overlap_dist_abs’ should be set.
If both are set, only the value of the last modified parameter is used.
Suggested values: 0.1, 0.5, 1
Default: 0.5
Restriction: ’max_overlap_dist_rel’ >= 0
’max_overlap_dist_abs’: This parameter has the same effect as the parameter ’max_overlap_dist_rel’. Note
that in contrast to ’max_overlap_dist_rel’, the value for ’max_overlap_dist_abs’ is set as an absolute
value. See ’max_overlap_dist_rel’ above, for a description of the effect of this parameter.
Note that only one of the parameters ’max_overlap_dist_rel’ and ’max_overlap_dist_abs’ should be set.
If both are set, only the value of the last modified parameter is used.
Suggested values: 1, 2, 3
Restriction: ’max_overlap_dist_abs’ >= 0
’scene_normal_computation’: This parameter controls the normal computation of the sampled scene.
In the default mode ’fast’, in most cases normals from the 3D scene are used (if it already contains
normals) or computed based on a small neighborhood of points (if not). The computed normals n are
then oriented such that nz ≥ 0 in case no original normals exist. This orientation of nz ≥ 0 implies the
assumption that the scene was observed from a camera looking along the z-axis.
In the default mode ’fast’, in case the model was trained for edge-supported surface-based matching and
the scene contains a mapping, input normals are not used and normals are always computed from the
mapping contained in the 3D scene. Further, the computed normals are oriented inwards consistently
with respect to the mapping.
In the mode ’mls’, normals are recomputed based on a larger neighborhood and using the more complex
but often more accurate ’mls’ method. A more detailed description of the ’mls’ method can be found
in the description of the operator surface_normals_object_model_3d. The ’mls’ mode is in-
tended for noisy data, such as images from time-of-flight cameras. The recomputed normals are oriented
as the normals in mode ’fast’.
List of values: ’fast’, ’mls’
Default: ’fast’
’scene_invert_normals’: Invert the orientation of the surface normals of the scene. The orientation of the surface
normals of the scene has to match the orientation of the model normals. If both the model and the scene are
acquired with the same setup, the normals will already point in the same direction. If you experience the
effect that the model is found on the ’outside’ of the scene surface, try to set this parameter to ’true’. Also,
make sure that the normals in the scene all point either outward or inward, i.e., are oriented consistently.
For edge-supported surface-based matching, the normal vectors have to point inwards; however, they are
typically generated automatically such that they point inwards consistently with respect to the mapping. The
orientation of the normals can be inspected using the procedure debug_find_surface_model.
List of values: ’false’, ’true’
Default: ’false’
’3d_edges’: Allows to manually set the 3D scene edges for edge-supported surface-based matching, i.e. if
the surface model was created with ’train_3d_edges’ enabled. The parameter must be a 3D object model
handle. The edges are usually a result of the operator edges_object_model_3d but can further
be filtered in order to remove outliers. If this parameter is not given, find_surface_model will
internally extract the edges similar to the operator edges_object_model_3d.
’3d_edge_min_amplitude_rel’: Sets the threshold when extracting 3D edges for edge-supported surface-
based matching, i.e. if the surface model was created with ’train_3d_edges’ enabled. The threshold
is set relative to the diameter of the object. Note that if edges were passed manually with the generic
parameter ’3d_edges’, this parameter is ignored. Otherwise, it behaves identically to the parameter
MinAmplitude of operator edges_object_model_3d.
Suggested values: 0.05, 0.1, 0.5
Default: 0.05
Restriction: ’3d_edge_min_amplitude_rel’ >= 0
’3d_edge_min_amplitude_abs’: Similar to ’3d_edge_min_amplitude_rel’, however, the value is given as ab-
solute distance and not relative to the object diameter.
Restriction: ’3d_edge_min_amplitude_abs’ >= 0
’viewpoint’: This parameter specifies the viewpoint from which the 3D data is seen. It is used for surface
models that are prepared for view-based score computation (i.e. with ’train_view_based’ enabled) to get
the maximum number of potentially visible points of the model based on the current viewpoint. For this,
GenParamValue must contain a string consisting of the three coordinates (x, y, and z) of the view-


point, separated by spaces. The viewpoint is defined in the same coordinate frame as ObjectModel3D
and should roughly correspond to the position the scene was acquired from. A visualization of the
viewpoint can be created using the procedure debug_find_surface_model in order to inspect its
position.
Default: ’0 0 0’
’max_gap’: Gaps in the 3D data are closed, as far as they do not exceed the maximum gap size ’max_gap’
[pixels] and the surface model was created with ’train_3d_edges’ enabled. Larger gaps will contain
edges at their boundary, while gaps smaller than this value will not. This suppresses edges around
smaller patches that were not reconstructed by the sensor as well as edges at the more distant part of a
discontinuity. For sensors with very large resolutions, the value should be increased to avoid spurious
edges. Note that if edges were passed manually with the generic parameter ’3d_edges’, this param-
eter is ignored. Otherwise, it behaves identically to the parameter GenParamName of the operator
edges_object_model_3d when ’max_gap’ is set.
The influence of ’max_gap’ can be inspected using the procedure debug_find_surface_model.
Default: 30
’use_3d_edges’: Turns the edge-supported matching on or off. This can be used to perform matching without
3D edges, even though the model was created for edge-supported matching. If the model was not created
for edge-supported surface-based matching, an error is returned.
List of values: ’true’, ’false’
Default: ’true’
2. Sparse pose refinement In this second step, the approximate poses found in the previous step are further re-
fined. This increases the accuracy of the poses and the significance of the score value.
The following generic parameters control the sparse pose refinement and can be set with GenParamName
and GenParamValue:
’sparse_pose_refinement’: Enables or disables the sparse pose refinement.
List of values: ’true’, ’false’
Default: ’true’
’pose_ref_use_scene_normals’: Enables or disables the usage of scene normals for the pose refinement. If
this parameter is enabled, and if the scene contains point normals, then those normals are used to increase
the accuracy of the pose refinement. For this, the influence of scene points whose normal points in a
different direction than the model normal is decreased. Note that the scene must contain point normals.
Otherwise, this parameter is ignored.
List of values: ’true’, ’false’
Default: ’false’
’use_view_based’: Turns the view-based score computation for surface-based matching on or off. This can
be used to perform matching without using the view-based score, even though the model was prepared
for view-based score computation. The influence of ’use_view_based’ on the score is explained in the
documentation of Score above.
If the model was not prepared for view-based score computation, an error is returned.
List of values: ’true’, ’false’
Default: ’false’, if ’train_view_based’ was disabled when creating the model, otherwise ’true’.
3. Dense pose refinement Accurately refines the poses found in the previous steps.
The following generic parameters influence the accuracy and speed of the dense pose refinement and can be
set with GenParamName and GenParamValue:
’dense_pose_refinement’: Enables or disables the dense pose refinement.
List of values: ’true’, ’false’
Default: ’true’
’pose_ref_num_steps’: Number of iterations for the dense pose refinement. Increasing the number of
iterations leads to a more accurate pose at the expense of runtime. However, once convergence is reached, the
accuracy can no longer be increased, even if the number of steps is increased. Note that this parameter
is ignored if the dense pose refinement is disabled.
Suggested values: 1, 3, 5, 20
Default: 5
Restriction: ’pose_ref_num_steps’ > 0
’pose_ref_sub_sampling’: Set the rate of scene points to be used for the dense pose refinement. For example,
if this value is set to 5, every 5th point from the scene is used for pose refinement. This parameter allows
an easy trade-off between speed and accuracy of the pose refinement: Increasing the value leads to less


points being used and in turn to a faster but less accurate pose refinement. Decreasing the value has the
inverse effect. Note that this parameter is ignored if the dense pose refinement is disabled.
Suggested values: 1, 2, 5, 10
Default: 2
Restriction: ’pose_ref_sub_sampling’ > 0
’pose_ref_dist_threshold_rel’: Set the distance threshold for dense pose refinement relative to the diameter
of the surface model. Only scene points that are closer to the object than this distance are used for the
optimization. Scene points further away are ignored.
Note that only one of the parameters ’pose_ref_dist_threshold_rel’ and ’pose_ref_dist_threshold_abs’
should be set. If both are set, only the value of the last modified parameter is used. Note that this
parameter is ignored if the dense pose refinement is disabled.
Suggested values: 0.03, 0.05, 0.1, 0.2
Default: 0.1
Restriction: ’pose_ref_dist_threshold_rel’ > 0
’pose_ref_dist_threshold_abs’: Set the distance threshold for dense pose refinement as an absolute value.
See ’pose_ref_dist_threshold_rel’ for a detailed description.
Note that only one of the parameters ’pose_ref_dist_threshold_rel’ and ’pose_ref_dist_threshold_abs’
should be set. If both are set, only the value of the last modified parameter is used.
Restriction: ’pose_ref_dist_threshold_abs’ > 0
’pose_ref_scoring_dist_rel’: Set the distance threshold for scoring relative to the diameter of the surface
model. See the following ’pose_ref_scoring_dist_abs’ for a detailed description.
Note that only one of the parameters ’pose_ref_scoring_dist_rel’ and ’pose_ref_scoring_dist_abs’
should be set. If both are set, only the value of the last modified parameter is used. Note that this
parameter is ignored if the dense pose refinement is disabled.
Suggested values: 0.2, 0.01, 0.005, 0.0001
Default: 0.005
Restriction: ’pose_ref_scoring_dist_rel’ > 0
’pose_ref_scoring_dist_abs’: Set the distance threshold for scoring. Only scene points that are closer to the
object than this distance are considered to be ’on the model’ when computing the score after the pose
refinement. All other scene points are considered not to be on the model. The value should correspond
to the amount of noise on the coordinates of the scene points. Note that this parameter is ignored if the
dense pose refinement is disabled.
Note that only one of the parameters ’pose_ref_scoring_dist_rel’ and ’pose_ref_scoring_dist_abs’
should be set. If both are set, only the value of the last modified parameter is used.
’pose_ref_use_scene_normals’: Enables or disables the usage of scene normals for the pose refinement. This
parameter is explained in more details in the section Sparse pose refinement above.
List of values: ’true’, ’false’
Default: ’false’
’pose_ref_dist_threshold_edges_rel’: Set the distance threshold of edges for dense pose refinement relative
to the diameter of the surface model. Only scene edges that are closer to the object edges than this
distance are used for the optimization. Scene edges further away are ignored.
Note that only one of the parameters ’pose_ref_dist_threshold_edges_rel’ and
’pose_ref_dist_threshold_edges_abs’ should be set. If both are set, only the value of the last
modified parameter is used. Note that this parameter is ignored if the dense pose refinement is disabled
or if no edge-supported surface-based matching is used.
Suggested values: 0.03, 0.05, 0.1, 0.2
Default: 0.1
Restriction: ’pose_ref_dist_threshold_edges_rel’ > 0
’pose_ref_dist_threshold_edges_abs’: Set the distance threshold of edges for dense pose refinement as an
absolute value. See ’pose_ref_dist_threshold_edges_rel’ for a detailed description.
Note that only one of the parameters ’pose_ref_dist_threshold_edges_rel’ and
’pose_ref_dist_threshold_edges_abs’ should be set. If both are set, only the value of the last
modified parameter is used. Note that this parameter is ignored if the dense pose refinement is disabled
or if no edge-supported surface-based matching is used.
Restriction: ’pose_ref_dist_threshold_edges_abs’ > 0
’pose_ref_scoring_dist_edges_rel’: Set the distance threshold of edges for scoring relative to the diameter
of the surface model. See the following ’pose_ref_scoring_dist_edges_abs’ for a detailed description.
Note that only one of the parameters ’pose_ref_scoring_dist_edges_rel’ and


’pose_ref_scoring_dist_edges_abs’ should be set. If both are set, only the value of the last modi-
fied parameter is used. Note that this parameter is ignored if the dense pose refinement is disabled or if
no edge-supported surface-based matching is used.
Suggested values: 0.2, 0.01, 0.005, 0.0001
Default: 0.005
Restriction: ’pose_ref_scoring_dist_edges_rel’ > 0
’pose_ref_scoring_dist_edges_abs’: Set the distance threshold of edges for scoring as an absolute value.
Only scene edges that are closer to the object edges than this distance are considered to be ’on the
model’ when computing the score after the pose refinement. All other scene edges are considered not to
be on the model. The value should correspond to the expected inaccuracy of the extracted scene edges
and the inaccuracy of the refined pose.
Note that only one of the parameters ’pose_ref_scoring_dist_edges_rel’ and
’pose_ref_scoring_dist_edges_abs’ should be set. If both are set, only the value of the last modi-
fied parameter is used. Note that this parameter is ignored if the dense pose refinement is disabled or if
no edge-supported surface-based matching is used.
Restriction: ’pose_ref_scoring_dist_edges_abs’ > 0
’use_view_based’: Turns the view-based score computation for surface-based matching on or off. For further
details, see the respective description in the section about the sparse pose refinement above.
If the model was not prepared for view-based score computation, an error is returned.
List of values: ’true’, ’false’
Default: ’false’, if ’train_view_based’ was disabled when creating the model, otherwise ’true’.
’use_self_similar_poses’: Turns the optimization regarding self-similar, almost symmetric poses on or off.
If the model was not created with the parameter ’train_self_similar_poses’ activated, an error is returned
when setting ’use_self_similar_poses’ to ’true’.
List of values: ’true’, ’false’
Default: ’false’, if ’train_self_similar_poses’ was disabled when creating the model, otherwise ’true’.
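As referenced above, a combined usage sketch in HDevelop syntax (the chosen parameters and values are assumptions, not recommended settings):
* Search for up to three instances and use a finer dense pose refinement.
find_surface_model (SurfaceModelID, ObjectScene3D, 0.05, 0.2, 0.2, 'false', ['num_matches', 'pose_ref_num_steps', 'pose_ref_sub_sampling'], [3, 10, 1], Pose, Score, ResultID)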

Parameters
. SurfaceModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .surface_model ; handle
Handle of the surface model.
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model containing the scene.
. RelSamplingDistance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Scene sampling distance relative to the diameter of the surface model.
Default: 0.05
Suggested values: RelSamplingDistance ∈ {0.1, 0.07, 0.05, 0.04, 0.03}
Restriction: 0 < RelSamplingDistance < 1
. KeyPointFraction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Fraction of sampled scene points used as key points.
Default: 0.2
Suggested values: KeyPointFraction ∈ {0.3, 0.2, 0.1, 0.05}
Restriction: 0 < KeyPointFraction <= 1
. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer
Minimum score of the returned poses.
Default: 0
Restriction: MinScore >= 0
. ReturnResultHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Enable returning a result handle in SurfaceMatchingResultID.
Default: ’false’
Suggested values: ReturnResultHandle ∈ {’true’, ’false’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’num_matches’, ’max_overlap_dist_rel’, ’max_overlap_dist_abs’,
’sparse_pose_refinement’, ’dense_pose_refinement’, ’pose_ref_num_steps’, ’pose_ref_sub_sampling’,
’pose_ref_dist_threshold_rel’, ’pose_ref_dist_threshold_abs’, ’pose_ref_scoring_dist_rel’,
’pose_ref_scoring_dist_abs’, ’pose_ref_use_scene_normals’, ’scene_normal_computation’,
’scene_invert_normals’, ’3d_edge_min_amplitude_rel’, ’3d_edge_min_amplitude_abs’, ’viewpoint’,


’max_gap’, ’3d_edges’, ’pose_ref_dist_threshold_edges_rel’, ’pose_ref_dist_threshold_edges_abs’,


’pose_ref_scoring_dist_edges_rel’, ’pose_ref_scoring_dist_edges_abs’, ’use_3d_edges’, ’use_view_based’,
’use_self_similar_poses’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; string / real / integer
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {0, 1, ’true’, ’false’, 0.005, 0.01, 0.03, 0.05, 0.1,
’num_scene_points’, ’model_point_fraction’, ’num_model_points’, ’fast’, ’mls’}
. Pose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
3D pose of the surface model in the scene.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Score of the found instances of the surface model.
. SurfaceMatchingResultID (output_control) . . . . . . . . . . . . . surface_matching_result(-array) ; handle
Handle of the matching result, if enabled in ReturnResultHandle.
Result
find_surface_model returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception is
raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d, get_object_model_3d_params,
read_surface_model, create_surface_model, get_surface_model_param,
edges_object_model_3d
Possible Successors
refine_surface_model_pose, get_surface_matching_result,
clear_surface_matching_result, clear_object_model_3d
Alternatives
refine_surface_model_pose, find_surface_model_image,
refine_surface_model_pose_image
See also
refine_surface_model_pose, find_surface_model_image
Module
3D Metrology

find_surface_model_image ( Image : : SurfaceModelID,
ObjectModel3D, RelSamplingDistance, KeyPointFraction, MinScore,
ReturnResultHandle, GenParamName, GenParamValue : Pose, Score,
SurfaceMatchingResultID )

Find the best matches of a surface model in a 3D scene and images.


The operator find_surface_model_image finds the best matches of the surface model SurfaceModelID
in the scene that is comprised of the 3D surface in ObjectModel3D and the images of the scene in
Image. Note that the number of images passed in Image must correspond to the number of cameras
set with set_surface_model_param. Note also that the surface model must have been created by
create_surface_model with the parameter ’train_3d_edges’ enabled.
The images are used only in the sparse and dense refinement step. For this, the refinement simultaneously optimizes
the alignment of the model with the 3D scene as well as the alignment of the reprojected edges of the model
silhouette with edges in the passed images. The domain of the images is ignored.


In addition to the parameters documented in find_surface_model, find_surface_model_image also
supports the following generic parameters (a usage sketch follows the list):

’min_contrast’: Sets the minimum contrast of the object in the search images. Edges with a contrast below this
threshold are ignored in the refinement.
Suggested values: 5, 10, 20
Default: 10
Restriction: ’min_contrast’ >= 0
’max_deformation’: Sets the search range in pixels for corresponding edges in the image. This parameter
can be used if the shape of the object is slightly deformed compared to the original 3D model used in
create_surface_model. Note that increasing this parameter can have a significant impact on the run-
time of the refinement.
Suggested values: 0, 1, 5
Default: 1
Restriction: ’max_deformation’ >= 0
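As referenced above, a hedged usage sketch in HDevelop syntax (the parameter values are assumptions; the camera describing Image is assumed to have been configured beforehand with set_surface_model_param):
* Edge-supported matching that additionally aligns the reprojected model
* silhouette with image edges of at least 15 gray values contrast.
find_surface_model_image (Image, SurfaceModelID, ObjectScene3D, 0.05, 0.2, 0.2, 'true', ['min_contrast'], [15], Pose, Score, SurfaceMatchingResultID)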

Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; object : byte / uint2
Images of the scene.
. SurfaceModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .surface_model ; handle
Handle of the surface model.
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model containing the scene.
. RelSamplingDistance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Scene sampling distance relative to the diameter of the surface model.
Default: 0.05
Suggested values: RelSamplingDistance ∈ {0.1, 0.07, 0.05, 0.04, 0.03}
Restriction: 0 < RelSamplingDistance < 1
. KeyPointFraction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Fraction of sampled scene points used as key points.
Default: 0.2
Suggested values: KeyPointFraction ∈ {0.3, 0.2, 0.1, 0.05}
Restriction: 0 < KeyPointFraction <= 1
. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer
Minimum score of the returned poses.
Default: 0
Restriction: MinScore >= 0
. ReturnResultHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Enable returning a result handle in SurfaceMatchingResultID.
Default: ’false’
Suggested values: ReturnResultHandle ∈ {’true’, ’false’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’num_matches’, ’max_overlap_dist_rel’, ’max_overlap_dist_abs’,
’sparse_pose_refinement’, ’dense_pose_refinement’, ’pose_ref_num_steps’, ’pose_ref_sub_sampling’,
’pose_ref_dist_threshold_rel’, ’pose_ref_dist_threshold_abs’, ’pose_ref_scoring_dist_rel’,
’pose_ref_scoring_dist_abs’, ’pose_ref_use_scene_normals’, ’scene_normal_computation’,
’scene_invert_normals’, ’3d_edge_min_amplitude_rel’, ’3d_edge_min_amplitude_abs’, ’viewpoint’,
’max_gap’, ’3d_edges’, ’max_deformation’, ’min_contrast’, ’use_3d_edges’, ’use_view_based’,
’use_self_similar_poses’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; string / real / integer
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {0, 1, ’true’, ’false’, 0.005, 0.01, 0.03, 0.05, 0.1,
’num_scene_points’, ’model_point_fraction’, ’num_model_points’, ’fast’, ’mls’}


. Pose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
3D pose of the surface model in the scene.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Score of the found instances of the surface model.
. SurfaceMatchingResultID (output_control) . . . . . . . . . . . . . surface_matching_result(-array) ; handle
Handle of the matching result, if enabled in ReturnResultHandle.
Result
find_surface_model_image returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an ex-
ception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.

This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d, get_object_model_3d_params,
read_surface_model, create_surface_model, get_surface_model_param,
edges_object_model_3d
Possible Successors
refine_surface_model_pose, get_surface_matching_result,
clear_surface_matching_result, clear_object_model_3d
Alternatives
refine_surface_model_pose, find_surface_model, refine_surface_model_pose_image
See also
refine_surface_model_pose, find_surface_model
Module
3D Metrology

get_surface_matching_result ( : : SurfaceMatchingResultID,
ResultName, ResultIndex : ResultValue )

Get details of a result from surface-based matching.


The operator get_surface_matching_result returns details about the results of surface-based matching
or the surface pose refinement. The results are stored in SurfaceMatchingResultID, which must have been
created by find_surface_model or refine_surface_model_pose.
The parameter ResultName is used to select which result detail shall be returned. If details about one of the
results shall be retrieved, ResultIndex selects the result index, where 0 selects the first result. ResultIndex
is ignored for certain values of ResultName.
The following values are possible for ResultName if SurfaceMatchingResultID was created by
find_surface_model or find_surface_model_image:

’sampled_scene’: A 3D object model handle is returned that contains the sampled scene points that were used
in the approximate matching step. This is helpful for tuning the sampling distance for the matching (see
parameter RelSamplingDistance of operator find_surface_model). The parameter ResultIndex is
ignored.
’key_points’: A 3D object model handle is returned that contains all points from the 3D scene that were used
as key points in the matching process. This is helpful for tuning the sampling distance and key point rate
for the matching (see parameter KeyPointFraction of operator find_surface_model). The parameter
ResultIndex is ignored. At least 10 key points should be on the object of interest for stable results.


’score_unrefined’: The score of the result before the dense pose refinement is returned. If the sparse pose
refinement was disabled, this is the score of the approximate matching. Otherwise the score of the
sparse pose refinement is returned. See find_surface_model for details about the score. In
ResultIndex the index of the result must be specified. If SurfaceMatchingResultID was created
by refine_surface_model_pose, 0 is returned.
’sampled_3d_edges’: If the surface model was trained with ’train_3d_edges’ enabled, a 3D object model handle
is returned that contains the sampled 3D edge points that were used in the approximate matching step and in
the sparse refinement step. The parameter ResultIndex is ignored.

The following values are always possible for ResultName, regardless of the operator with which
SurfaceMatchingResultID was created:

’pose’: Returns the pose of the matching or refinement result. In ResultIndex the index of the result must be
specified.
’score_refined’: Returns the score of the result after the dense pose refinement. See find_surface_model
for details about this score. In ResultIndex the index of the result must be specified. If
SurfaceMatchingResultID was created by find_surface_model and dense pose refinement was
disabled, 0 is returned.
’score’: Returns the combined score of the result indexed in ResultIndex, thus this parameter is equal to
Score returned in find_surface_model.
’score_surface’: Returns the surface-based score of the result indexed in ResultIndex. If not specifically set
otherwise, this score is equal to ’score_refined’.
’score_3d_edges’: Returns the 3D edge score of the result indexed in ResultIndex. This score is only applicable
for edge-supported surface-based matching.
’score_2d_edges’: Returns the 2D edge score of the result indexed in ResultIndex. This score is only applicable
for edge-supported surface-based matching.
’score_view_based’: Returns the view-based score of the result indexed in ResultIndex. This score is only
applicable if the surface model supports view-based score computation.
’all_scores’: Returns for the result indexed in ResultIndex the values of the five scores ’score’,
’score_surface’, ’score_3d_edges’, ’score_2d_edges’, and ’score_view_based’. The scores are returned in the
same order as the thresholds given via the parameter MinScore of the matching and refinement operators.
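A minimal HDevelop sketch of typical queries; the result handle is assumed to come from a matching call with ReturnResultHandle set to ’true’, and the variable names are placeholders:

* Sketch: query details of the best match (result index 0).
get_surface_matching_result (SurfaceMatchingResultID, 'pose', 0, BestPose)
get_surface_matching_result (SurfaceMatchingResultID, 'score_refined', 0, ScoreRefined)
* The sampled scene does not depend on the result index (ResultIndex is ignored).
get_surface_matching_result (SurfaceMatchingResultID, 'sampled_scene', 0, SampledSceneOM3D)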

Parameters
. SurfaceMatchingResultID (input_control) . . . . . . . . . . . . . . . . . . . . . surface_matching_result ; handle
Handle of the surface matching result.
. ResultName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Name of the result property.
Default: ’pose’
List of values: ResultName ∈ {’sampled_scene’, ’key_points’, ’pose’, ’score_unrefined’, ’score_refined’,
’sampled_3d_edges’, ’score’, ’score_surface’, ’score_3d_edges’, ’score_2d_edges’, ’score_view_based’,
’all_scores’}
. ResultIndex (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Index of the matching result, starting with 0.
Default: 0
Suggested values: ResultIndex ∈ {0, 1, 2, 3}
Restriction: ResultIndex >= 0
. ResultValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string / real / handle
Value of the result property.
Result
If the handle of the result is valid, the operator get_surface_matching_result returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).


• Processed without parallelization.
Possible Predecessors
find_surface_model, refine_surface_model_pose
Possible Successors
clear_surface_model
See also
find_surface_model, refine_surface_model_pose, read_surface_model,
write_surface_model, clear_surface_model
Module
3D Metrology

get_surface_model_param ( : : SurfaceModelID,
GenParamName : GenParamValue )

Return the parameters and properties of a surface model.


The operator get_surface_model_param returns parameters and properties of the surface model
SurfaceModelID. The surface model must have been created by create_surface_model or
read_surface_model. The names of the desired properties are passed in the generic parameter
GenParamName, the corresponding values are returned in GenParamValue.
The following values are possible for GenParamName:

’diameter’: Diameter of the model point cloud. The diameter is the length of the diagonal of the axis-parallel
bounding box (see parameter ’bounding_box1’).
’center’: Center point of the model. The center point is the center of the axis-parallel bounding box (see parameter
’bounding_box1’).
’bounding_box1’: Smallest enclosing axis-parallel cuboid (min_x, min_y, min_z, max_x, max_y, max_z).
’sampled_model’: The 3D points sampled from the model for matching. This returns an ObjectModel3D that
contains all points sampled from the model surface for matching.
’sampled_pose_refinement’: The 3D model points subsampled from the model for the pose refinement. This
returns an ObjectModel3D that contains all points sampled from the model surface for pose refinement.
’3d_edges_trained’: Returns whether the surface model was prepared for edge-supported surface-based matching, i.e.,
whether the parameter ’train_3d_edges’ was enabled in create_surface_model. The returned value is either
’true’ or ’false’.
’view_based_trained’: Returns whether the surface model was prepared to support view-based score
computation for surface-based matching, i.e., whether the parameter ’train_view_based’ was enabled in
create_surface_model. The returned value is either ’true’ or ’false’.
’camera_parameter’:
’camera_parameter X’: Returns the camera parameters for camera number X, where X is a zero-based index for
the cameras. If not given, X defaults to zero (first camera). The camera parameters must previously have been
set by set_surface_model_param.
’camera_pose’:
’camera_pose X’: Returns the camera pose for camera number X, where X is a zero-based index for the cameras.
If not given, X defaults to zero (first camera).
’symmetry_axis_direction’:
’symmetry_axis_origin’: Returns the symmetry axis or origin, respectively, as set with
set_surface_model_param. If no axis is set, an empty tuple is returned.
’symmetry_poses’: Returns the symmetry poses as set with set_surface_model_param.
’symmetry_poses_all’: Returns all symmetry poses created by set_surface_model_param based on the
symmetry poses set with set_surface_model_param.


’pose_restriction_reference_pose’: Returns the reference pose as set with set_surface_model_param, or
an empty tuple if not set.
’pose_restriction_max_angle_diff’: Returns the maximum angular difference between the reference pose and
found poses, in radians, or an empty tuple if not set.
’pose_restriction_allowed_axis_direction’:
’pose_restriction_allowed_axis_origin’: Returns the allowed rotation axis and origin, respectively, as set with
set_surface_model_param. If no axis is set, an empty tuple is returned.
’pose_restriction_filter_final_poses_only’: Returns ’true’ if only the final poses are filtered, or ’false’ if the poses
are filtered during the matching process (default).
’self_similar_poses_trained’: Returns whether the surface model was prepared for optimizations regarding self-
similar, almost symmetric poses, i.e., whether the parameter ’train_self_similar_poses’ was enabled in
create_surface_model. The returned value is either ’true’ or ’false’.
’sampled_self_similarity’: Returns an ObjectModel3D that contains those 3D points of the model that were sam-
pled for the search of self-similar poses.
’self_similar_poses’: Returns the poses under which the object is self-similar, i.e., almost symmetric. If the param-
eter ’train_self_similar_poses’ was not enabled in create_surface_model, an empty tuple is returned.
’self_similar_poses_models’: Returns a tuple of ObjectModel3Ds that contains a copy of the original model,
transformed into the poses returned by ’self_similar_poses’. This allows for a visual inspection of the self-
similar poses. This parameter is only available if the surface model was created with the parameter
’train_self_similar_poses’ activated.
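A minimal HDevelop sketch (placeholder handle) that inspects a few of the properties listed above:

* Sketch: query basic geometric properties and the training state of the model.
get_surface_model_param (SurfaceModelID, 'diameter', Diameter)
get_surface_model_param (SurfaceModelID, 'bounding_box1', BoundingBox1)
get_surface_model_param (SurfaceModelID, '3d_edges_trained', EdgesTrained)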

Parameters

. SurfaceModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .surface_model ; handle


Handle of the surface model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Name of the parameter.
Default: ’diameter’
List of values: GenParamName ∈ {’diameter’, ’center’, ’bounding_box1’, ’sampled_model’,
’sampled_pose_refinement’, ’3d_edges_trained’, ’camera_parameter’, ’camera_pose’,
’symmetry_axis_direction’, ’symmetry_axis_origin’, ’symmetry_poses’, ’symmetry_poses_all’,
’pose_restriction_reference_pose’, ’pose_restriction_max_angle_diff’,
’pose_restriction_allowed_axis_direction’, ’pose_restriction_allowed_axis_origin’,
’pose_restriction_filter_final_poses_only’, ’view_based_trained’, ’self_similar_poses_trained’,
’sampled_self_similarity’, ’self_similar_poses’, ’self_similar_poses_models’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . attribute.value(-array) ; real / string / integer / handle
Value of the parameter.
Result
get_surface_model_param returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an excep-
tion is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
create_surface_model, read_surface_model
Possible Successors
find_surface_model, refine_surface_model_pose, write_surface_model
See also
create_surface_model, set_surface_model_param
Module
3D Metrology


read_surface_model ( : : FileName : SurfaceModelID )

Read a surface model from a file.


The operator read_surface_model reads the surface model, which has been written with
write_surface_model, from the file FileName. The handle of the surface model is returned in
SurfaceModelID. If no absolute path is given in FileName, the file is searched in the current directory of
the HALCON process. The default HALCON file extension for the surface model (SFM) file is ’sfm’. If no file
named FileName exists, the default file extension is appended to FileName.
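A minimal HDevelop sketch (the file name is a placeholder):

* Sketch: read a surface model that was written with write_surface_model.
read_surface_model ('part.sfm', SurfaceModelID)
* ... perform matching or pose refinement with the model ...
clear_surface_model (SurfaceModelID)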
Parameters
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
Name of the SFM file.
File extension: .sfm
. SurfaceModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . surface_model ; handle
Handle of the read surface model.
Result
read_surface_model returns 2 (H_MSG_TRUE) if all parameters are correct and the file can be read. If the
file is not a surface model file, the error 9506 is raised. If the file has a version that can not be read by this version
of HALCON, the error 9507 is raised. If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d
Possible Successors
find_surface_model, refine_surface_model_pose, get_surface_model_param,
clear_surface_model, find_surface_model_image, refine_surface_model_pose_image
Alternatives
create_surface_model
See also
create_surface_model, write_surface_model
Module
3D Metrology

refine_surface_model_pose ( : : SurfaceModelID, ObjectModel3D,


InitialPose, MinScore, ReturnResultHandle, GenParamName,
GenParamValue : Pose, Score, SurfaceMatchingResultID )

Refine the pose of a surface model in a 3D scene.


The operator refine_surface_model_pose refines the approximate pose InitialPose of the surface
model SurfaceModelID in the 3D scene ObjectModel3D. The surface model SurfaceModelID must
have been created previously with create_surface_model or read_surface_model. Additionally,
set_surface_model_param can be used to set certain parameters that influence the refinement, such as
restricting the allowed range of rotations.
refine_surface_model_pose is useful if the pose of an object in a scene is approximately known and only
needs to be refined. The refined pose is returned in Pose, along with a score in Score. It is possible to pass
multiple poses for refinement. Note that, contrary to find_surface_model, the returned poses are not sorted
by their score but are returned in the same order as the input poses.


The maximum possible error in the approximate pose that can still be refined depends on the type of object, the
amount of clutter in the scene and the visible parts of the objects. In general, differences in the orientation of up to
15° and differences in the position of up to 10% can be refined.
The accuracy of the pose refinement is limited to around 0.1% of the model’s size due to numerical reasons. The
accuracy further depends on the noise of the scene points, the number of scene points and the shape of the model.
Details about the pose refinement and the parameters are described in the documentation of
find_surface_model in the section about the dense pose refinement step. The following generic parameters
can be set for refine_surface_model_pose, and are also documented in find_surface_model:
’pose_ref_num_steps’, ’pose_ref_sub_sampling’, ’pose_ref_dist_threshold_rel’, ’pose_ref_dist_threshold_abs’,
’pose_ref_scoring_dist_rel’, ’pose_ref_scoring_dist_abs’, ’pose_ref_use_scene_normals’,
’3d_edge_min_amplitude_rel’, ’3d_edge_min_amplitude_abs’, ’3d_edges’, ’use_3d_edges’, ’use_view_based’,
’use_self_similar_poses’, ’pose_ref_dist_threshold_edges_rel’, ’pose_ref_dist_threshold_edges_abs’,
’pose_ref_scoring_dist_edges_rel’, and ’pose_ref_scoring_dist_edges_abs’.
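The following HDevelop lines give a minimal, hedged sketch of a typical call; the scene handle, the initial pose values, the MinScore of 0.2, and the generic parameter value are placeholder assumptions, not recommendations:

* Sketch: refine an approximately known pose (placeholder handles and values).
* SurfaceModelID and the scene ObjectModel3D are assumed to exist already.
create_pose (0.1, -0.05, 0.6, 0, 0, 90, 'Rp+T', 'gba', 'point', InitialPose)
refine_surface_model_pose (SurfaceModelID, ObjectModel3D, InitialPose, 0.2, 'true', ['pose_ref_num_steps'], [5], RefinedPose, Score, SurfaceMatchingResultID)
* Details of the refinement can then be queried with get_surface_matching_result.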
Parameters
. SurfaceModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .surface_model ; handle
Handle of the surface model.
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model containing the scene.
. InitialPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
Initial pose of the surface model in the scene.
. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer
Minimum score of the returned poses.
Default: 0
Restriction: MinScore >= 0
. ReturnResultHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Enable returning a result handle in SurfaceMatchingResultID.
Default: ’false’
List of values: ReturnResultHandle ∈ {’true’, ’false’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’pose_ref_num_steps’, ’pose_ref_sub_sampling’,
’pose_ref_dist_threshold_rel’, ’pose_ref_dist_threshold_abs’, ’pose_ref_scoring_dist_rel’,
’pose_ref_scoring_dist_abs’, ’pose_ref_use_scene_normals’, ’3d_edge_min_amplitude_rel’,
’3d_edge_min_amplitude_abs’, ’viewpoint’, ’3d_edges’, ’use_3d_edges’, ’use_view_based’,
’use_self_similar_poses’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; string / real / integer
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {0, 1, ’true’, ’false’, 0.005, 0.01, 0.03, 0.05, 0.1}
. Pose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
3D pose of the surface model in the scene.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Score of the found instances of the model.
. SurfaceMatchingResultID (output_control) . . . . . . . . . . . . . surface_matching_result(-array) ; handle
Handle of the matching result, if enabled in ReturnResultHandle.
Result
refine_surface_model_pose returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an
exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.


This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d, get_object_model_3d_params,
read_surface_model, create_surface_model, get_surface_model_param,
find_surface_model, edges_object_model_3d
Possible Successors
get_surface_matching_result, clear_surface_matching_result,
clear_object_model_3d
Alternatives
find_surface_model, refine_surface_model_pose_image, find_surface_model_image
See also
create_surface_model, find_surface_model, refine_surface_model_pose_image
Module
3D Metrology

refine_surface_model_pose_image ( Image : : SurfaceModelID,


ObjectModel3D, InitialPose, MinScore, ReturnResultHandle,
GenParamName, GenParamValue : Pose, Score,
SurfaceMatchingResultID )

Refine the pose of a surface model in a 3D scene and in images.


The operator refine_surface_model_pose_image refines the approximate pose InitialPose of the
surface model SurfaceModelID in the 3D scene comprised of the 3D surface in ObjectModel3D and the
images of the scene in Image. Note that the number of images passed in Image must correspond to the number
of cameras set with set_surface_model_param. Note also that the surface model must have been created
by create_surface_model with the parameter ’train_3d_edges’ enabled.
The refinement simultaneously optimizes the alignment of the model with the 3D scene as well as the alignment
of the reprojected edges with edges in the passed images. The domain of the images is ignored.
In addition to the parameters documented in refine_surface_model_pose,
refine_surface_model_pose_image also supports the generic parameters ’min_contrast’ and
’max_deformation’, documented in find_surface_model_image.
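As a hedged HDevelop sketch (not an official example), assuming the surface model was created with ’train_3d_edges’ enabled, a single camera was registered on the model with set_surface_model_param, and Image is the corresponding camera image; the MinScore and ’min_contrast’ values are placeholder assumptions:

* Sketch: refine a pose using both the 3D scene and a 2D camera image.
refine_surface_model_pose_image (Image, SurfaceModelID, ObjectModel3D, InitialPose, 0.2, 'false', ['min_contrast'], [10], Pose, Score, SurfaceMatchingResultID)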
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; object : byte / uint2
Images of the scene.
. SurfaceModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .surface_model ; handle
Handle of the surface model.
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model containing the scene.
. InitialPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
Initial pose of the surface model in the scene.
. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer
Minimum score of the returned poses.
Default: 0
Restriction: MinScore >= 0
. ReturnResultHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Enable returning a result handle in SurfaceMatchingResultID.
Default: ’false’
List of values: ReturnResultHandle ∈ {’true’, ’false’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’pose_ref_num_steps’, ’pose_ref_sub_sampling’,
’pose_ref_dist_threshold_rel’, ’pose_ref_dist_threshold_abs’, ’pose_ref_scoring_dist_rel’,
’pose_ref_scoring_dist_abs’, ’pose_ref_use_scene_normals’, ’max_deformation’, ’min_contrast’,
’3d_edge_min_amplitude_rel’, ’3d_edge_min_amplitude_abs’, ’viewpoint’, ’3d_edges’, ’use_3d_edges’,
’use_view_based’, ’use_self_similar_poses’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; string / real / integer
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {0, 1, ’true’, ’false’, 0.005, 0.01, 0.03, 0.05, 0.1}
. Pose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
3D pose of the surface model in the scene.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Score of the found instances of the model.
. SurfaceMatchingResultID (output_control) . . . . . . . . . . . . . surface_matching_result(-array) ; handle
Handle of the matching result, if enabled in ReturnResultHandle.
Result
refine_surface_model_pose_image returns 2 (H_MSG_TRUE) if all parameters are correct. If neces-
sary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d, get_object_model_3d_params,
read_surface_model, create_surface_model, get_surface_model_param,
find_surface_model, edges_object_model_3d
Possible Successors
get_surface_matching_result, clear_surface_matching_result,
clear_object_model_3d
Alternatives
find_surface_model, refine_surface_model_pose, find_surface_model_image
See also
create_surface_model, find_surface_model, refine_surface_model_pose
Module
3D Metrology

serialize_surface_model (
: : SurfaceModelID : SerializedItemHandle )

Serialize a surface_model.
serialize_surface_model serializes the data of a surface model (see fwrite_serialized_item
for an introduction of the basic principle of serialization). The same data that is written in a file by
write_surface_model is converted to a serialized item. The surface model is defined by the handle
SurfaceModelID. The serialized surface model is returned by the handle SerializedItemHandle and
can be deserialized by deserialize_surface_model.
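A minimal HDevelop sketch of a serialize/deserialize round trip (handle names are placeholders):

* Sketch: serialize a surface model and restore it from the serialized item.
serialize_surface_model (SurfaceModelID, SerializedItemHandle)
deserialize_surface_model (SerializedItemHandle, RestoredSurfaceModelID)
* The serialized item could also be written to disk with fwrite_serialized_item.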
Parameters

. SurfaceModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .surface_model ; handle


Handle of the surface model.
. SerializedItemHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.


Result
If the parameters are valid, the operator serialize_surface_model returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
read_surface_model, create_surface_model, get_surface_model_param
Possible Successors
clear_surface_model, fwrite_serialized_item, send_serialized_item,
deserialize_surface_model
See also
create_surface_model, read_surface_model, write_surface_model
Module
3D Metrology

set_surface_model_param ( : : SurfaceModelID, GenParamName,


GenParamValue : )

Set parameters and properties of a surface model.


The operator set_surface_model_param sets parameters and properties of the surface model
SurfaceModelID. The surface model must have been created by create_surface_model or
read_surface_model. The names of the desired properties are passed in the generic parameter
GenParamName, the corresponding values are passed in GenParamValue.
The possible values for GenParamName are listed below.

• Defining cameras for image-based refinement. The following parameters allow setting and clearing camera
parameters and poses. These are used by the operators find_surface_model_image and
refine_surface_model_pose_image to project the surface model into the passed image.
Note that the camera parameters must be set before the camera pose.
’camera_parameter’:
’camera_parameter X’: Sets the camera parameters for camera number X, where X is a zero-based index for
the cameras. If not given, X defaults to zero (first camera). The camera parameters are used by the operators
find_surface_model_image and refine_surface_model_pose_image, which use the
images corresponding to the camera for the 3D pose refinement. Cameras must be added in increasing
order.
’camera_pose’:
’camera_pose X’: Sets the camera pose for camera number X, where X is a zero-based index for the cameras.
If not given, X defaults to zero (first camera). The pose defaults to the zero-pose [0,0,0,0,0,0,0] when
adding a new camera with ’camera_parameter’. This usually means that camera and 3D sensor have the
same point of origin.
’clear_cameras’: Removes all previously set cameras from the surface model.
• Defining Object Symmetries. The following parameters can be used to define symmetries of the 3D object
which was used for the creation of the surface model. If the 3D object is symmetric, that information can be
used to speed up the surface-based matching. Note that for surface models created with the ’train_3d_edges’
parameter enabled, no symmetries can be set.
By default, no symmetry is active.
Note that for performance reasons, when changing the symmetry with any of the parameters below, certain
internal data structures of the surface model are re-created, which can take a few seconds.


’symmetry_axis_direction’: Set the direction of the symmetry axis of the model. GenParamValue must be
a tuple with three numbers, containing the x-, y- and z-value of the axis direction. The model is modified
to use this symmetry information for speeding up the matching process.
To remove the symmetry information, pass an empty tuple in GenParamValue. Note that either a
symmetry axis or symmetry poses can be set, but not both.

An object (cylinder) with the symmetry axis direction [0,0,1].


In case that ’symmetry_axis_direction’ is used in combination with a restriction of the pose range
as described below, the value of ’symmetry_axis_direction’ is also used as if set with the parameter
’pose_restriction_allowed_axis_direction’.
’symmetry_axis_origin’: Set a point on the symmetry axis of the model. GenParamValue must be a
tuple with three numbers, which represent a point in model coordinates that lies on the symmetry
axis of the model. This parameter is optional and defaults to the center of the model as returned by
get_surface_model_param.
In case that ’symmetry_axis_origin’ is used in combination with a restriction of the pose range
as described below, the value of ’symmetry_axis_origin’ is also used as if set with the parameter
’pose_restriction_allowed_axis_origin’.
’symmetry_poses’: Set one or more symmetry poses of the model (see create_pose). The model must
be identical when transformed with any of those poses. The model is modified to use this symmetry
information for speeding up the matching process.
When setting one or more symmetry poses, set_surface_model_param will internally create all
poses that can be created by chaining and inverting the passed poses. To obtain all internally created
poses, use get_surface_model_param with the argument ’symmetry_poses_all’. If more than
100 poses are created internally, an error is returned, which indicates that the passed symmetry poses are
invalid.
To remove the symmetry poses, pass an empty tuple in GenParamValue. Note that either a symmetry
axis or symmetry poses can be set, but not both.


An object with a discontinuous symmetry. The symmetry pose for this object is [0,0,0, 0,0,360.0/5, 0].
• Restrict the pose range. The following parameters can be used to restrict the range of rotations in which
the surface model is searched for by find_surface_model, or the allowed range of rotations for the
refinement with refine_surface_model_pose.
By default, no pose range restriction is active.
Note that for performance reasons, when changing the pose range with any of the parameters below, certain
internal data structures of the surface model are re-created, which can take a few seconds.
’pose_restriction_reference_pose’: Set a reference pose of the model. The reference pose can be used along
with ’pose_restriction_max_angle_diff’, to restrict the allowed range of rotations of the model.
If GenParamValue is an empty tuple, any previously set reference pose is cleared and no pose range
restriction will be active for the model.
Otherwise, GenParamValue must be a pose (see create_pose). Note that the transla-
tion part of the pose is ignored. Also note that both ’pose_restriction_reference_pose’ and
’pose_restriction_max_angle_diff’ must be set in order for the pose restriction to be active.
’pose_restriction_max_angle_diff’: Set by how much the rotation of a pose found with
find_surface_model or refined with refine_surface_model_pose may deviate from the
rotation set with ’pose_restriction_reference_pose’, in radians.
If GenParamValue is an empty tuple, any previously set maximum deviation angle is cleared and no
pose range restriction will be active for the model.
Otherwise, GenParamValue must be an angle, which indicates by how much the rotations of a detected
pose ’P’ and the reference pose ’R’ set with ’pose_restriction_reference_pose’ may differ. The
comparison is performed for every model point using the formula ∠(Rv, Pv) ≤ max_angle_diff,
where v is the 3D point vector.
’pose_restriction_allowed_axis_direction’: Set an axis for which rotations are ignored when evaluating
the pose range (see ’pose_restriction_reference_pose’ and ’pose_restriction_max_angle_diff’). If
GenParamValue is an empty tuple, any previously set axis is cleared.
Otherwise, GenParamValue must contain a tuple of three numbers which are the direction of the axis
in model coordinates.
If such an axis is set, then a pose is considered to be within the allowed range if the angle between the axis
in the reference pose and the compared pose is smaller than the allowed angle, using ∠(R axis, P axis) ≤
max_angle_diff.
’pose_restriction_allowed_axis_origin’: Set a point on the allowed rotation axis of the model.
GenParamValue must be a tuple with three numbers, which represent a point in model coordinates
that lies on the symmetry axis of the model. This parameter is optional and defaults to the center of the
model as returned by get_surface_model_param.
’pose_restriction_filter_final_poses_only’: This flag allows to switch between two different modes for the
pose range restriction.
If GenParamValue is ’false’ (default), poses outside the defined pose range are removed early
in the matching process. Use this setting if the object pose in the scene is always within the defined
rotation range, but the object is sometimes found with incorrect rotations. Note that with
this setting, find_surface_model might return poses that the algorithm considers to be lo-
cally suboptimal, because the locally more optimal poses are outside the allowed pose range. Also
note that with this setting, the pose restriction is observed strictly. When passing an input pose to
refine_surface_model_pose that is outside the allowed pose range, it will be transformed to be
within the allowed pose range.
If GenParamValue is ’true’, only the final poses are filtered before returning them. This allows
removing poses that are valid object poses, but are not needed by the application because, for example,
the object cannot be picked up by the robot in a certain orientation. Note that in this setting, fewer poses
than requested might be returned by find_surface_model if one or more of the final poses are
outside the allowed pose range.
• Modifying self-similarities. The following parameters can be used to adapt the optimization regarding self-
similar poses, i.e., poses under which the model is almost symmetric. The parameters can only be set if the
parameter ’train_self_similar_poses’ was activated during the call of create_surface_model.
Note that for performance reasons, when changing the self-similarity search with any of the parameters below,
certain internal data structures of the surface model are re-created, which can take a few seconds.
’self_similar_poses’: Set the self-similar poses of the model. Those are poses under which the model is very
similar to itself and which can be confused during search.
find_surface_model will find such poses automatically if the parameter ’use_self_similar_poses’
is activated. The poses can be obtained with get_surface_model_param. If the automatically
determined poses are not sufficient to resolve self-similarities, the self-similar poses can be adapted with
this parameter. It is usually not recommended to modify this parameter.
GenParamValue must contain a list of poses. The identity pose will automatically be added to the list
of poses, if it is not already contained in it.

Attention
Note that in some cases, if this operator encounters an error condition while modifying the surface model, such
as an out-of-memory error, the model might be left in an inconsistent, partly changed state. In such cases, it is
recommended to clear the surface model and to no longer use it.
This does not apply to error codes due to invalid parameters, which are checked before performing any model
modification.
Also note that setting some of the options requires re-generation of internal data structures and can take as long as
the original call to create_surface_model.
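For illustration, a minimal HDevelop sketch (the handle and the concrete values are assumptions, not recommendations): it declares a rotational symmetry around the model z-axis and restricts found rotations to 30 degrees around a reference orientation.

* Sketch: declare a rotational symmetry around the model z-axis.
set_surface_model_param (SurfaceModelID, 'symmetry_axis_direction', [0,0,1])
* Sketch: restrict matching to poses within 30 degrees of a reference rotation.
create_pose (0, 0, 0, 0, 0, 0, 'Rp+T', 'gba', 'point', ReferencePose)
set_surface_model_param (SurfaceModelID, 'pose_restriction_reference_pose', ReferencePose)
set_surface_model_param (SurfaceModelID, 'pose_restriction_max_angle_diff', rad(30))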
Parameters
. SurfaceModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .surface_model ; handle
Handle of the surface model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Name of the parameter.
Default: ’camera_parameter’
List of values: GenParamName ∈ {’camera_parameter’, ’camera_pose’, ’clear_cameras’,
’symmetry_axis_direction’, ’symmetry_axis_origin’, ’symmetry_poses’, ’pose_restriction_reference_pose’,
’pose_restriction_max_angle_diff’, ’pose_restriction_allowed_axis_direction’,
’pose_restriction_allowed_axis_origin’, ’pose_restriction_filter_final_poses_only’, ’self_similar_poses’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; real / string / integer
Value of the parameter.
Suggested values: GenParamValue ∈ {’true’, ’false’, [], [0,0,0,0,0,0,0], [0,0,1]}
Result
set_surface_model_param returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an excep-
tion is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.


This operator modifies the state of the following input parameter:


• SurfaceModelID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_surface_model, read_surface_model, get_surface_model_param
Possible Successors
find_surface_model, refine_surface_model_pose, write_surface_model,
find_surface_model_image, refine_surface_model_pose_image
See also
create_surface_model, get_surface_model_param
Module
3D Metrology

write_surface_model ( : : SurfaceModelID, FileName : )

Write a surface model to a file.


The operator write_surface_model writes a surface model to the file FileName. The file can be read again
with read_surface_model. The default HALCON file extension for the surface model (SFM) file is ’sfm’.
Parameters
. SurfaceModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .surface_model ; handle
Handle of the surface model.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name.
File extension: .sfm
Result
write_surface_model returns 2 (H_MSG_TRUE) if all parameters are correct and the HALCON process has
write permission to the file. If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
read_surface_model, create_surface_model, get_surface_model_param
Possible Successors
clear_surface_model
See also
create_surface_model, read_surface_model
Module
3D Metrology



Chapter 4

3D Object Model

4.1 Creation

clear_object_model_3d ( : : ObjectModel3D : )

Free the memory of a 3D object model.


The operator clear_object_model_3d frees the memory of a 3D object model that was previously created.
After calling clear_object_model_3d, the model can no longer be used. The handle ObjectModel3D
becomes invalid.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model.
Result
If the handle of the model is valid, the operator clear_object_model_3d returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• ObjectModel3D
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d
Module
3D Metrology

copy_object_model_3d ( : : ObjectModel3D,
Attributes : CopiedObjectModel3D )

Copy a 3D object model.


A 3D object model consists of a set of attributes. The operator copy_object_model_3d creates a new
3D object model and copies the selected attributes of the input 3D object model to this new output 3D object
model. The input 3D object model is defined by a handle ObjectModel3D. The operator returns the handle
CopiedObjectModel3D of the new 3D object model. The operator can be used to save memory space by
removing not needed attributes. Access to the attributes of the 3D object model is possible, e.g., with the operator
get_object_model_3d_params.
The parameter Attributes determines which attributes should be copied. In addition, attributes can be ex-
cluded from copying by using the prefix ~. In order to remove attributes from a 3D object model, the operator
remove_object_model_3d_attrib can be used instead.
Note that because a 3D object model itself consists of a set of attributes, even the point coordinates are an attribute
of the model. This means that at least this attribute must be selected for copy_object_model_3d; otherwise,
the copied object model would be empty. Thus, if only a 3D object model representing a point cloud is to be
copied without further attributes, Attributes must be set to ’point_coord’. If an attribute to be copied is not
available or no attribute is selected, an exception is raised.
The following values for the parameter Attributes are possible:

’point_coord’: This value specifies that the attribute with the 3D point coordinates is copied.
’point_normal’: This value specifies that the attribute with the 3D point normals and the attribute with the 3D
point coordinates are copied.
’triangles’: This value specifies that the attribute with the face triangles and the attribute with the 3D point
coordinates are copied.
’polygons’: This value specifies that the attribute with the face polygons and the attribute with the 3D point coor-
dinates are copied.
’lines’: This value specifies that the attribute with the lines and the attribute with the 3D point coordinates are
copied.
’xyz_mapping’: This value specifies that the attribute with the mapping to image coordinates and the attribute with
the 3D point coordinates are copied.
’extended_attribute’: This value specifies that all extended attributes are copied. If it is necessary to copy further
attributes that are related to the extended attributes, these attributes are copied, too. These further attributes
could be, e.g., 3D point coordinates, face triangles, face polygons, or lines.
’primitives_all’: This value specifies that the attribute with the parameters of the primitive (including an empty
primitive) is copied (e.g., obtained from the operator fit_primitives_object_model_3d).
’primitive_plane’: This value specifies that the attribute with the primitive plane is copied (e.g., obtained from the
operator fit_primitives_object_model_3d).
’primitive_sphere’: This value specifies that the attribute with the primitive sphere is copied (e.g., obtained from
the operator fit_primitives_object_model_3d).
’primitive_cylinder’: This value specifies that the attribute with the primitive cylinder is copied (e.g., obtained
from the operator fit_primitives_object_model_3d).
’primitive_box’: This value specifies that the attribute with the primitive box is copied.
’shape_based_matching_3d_data’: This value specifies that the attribute with the prepared shape model for shape-
based 3D matching is copied.
’distance_computation_data’: This value specifies that the attribute with the distance computation data structure
is copied. The distance computation data can be created with prepare_object_model_3d, and can
be used with distance_object_model_3d. If this attribute is selected, then the corresponding target
data attribute of the distance computation is copied as well. For example, if the distance computation was
prepared for triangles, the triangles and the vertices are copied.
’surface_based_matching_data’: This value specifies that the data for surface based matching are copied. The
attributes with the 3D point coordinates and the attribute with the point normals are copied. If the attribute
with point normals is not available, the attribute with the mapping from the 3D point coordinates to the
image coordinates is copied. If the attribute with the mapping from the 3D point coordinates to the image
coordinates is not available, the attribute with the face triangles is copied. If the attribute with face triangles
is not available, too, the attribute with the face polygons is copied. If none of these attributes is available, an
exception is raised.
’segmentation_data’: This value specifies that the data for a 3D segmentation is copied. The attributes with the 3D
point coordinates and the attribute with the face triangles are copied. If the attribute with the face triangles
is not available, the attribute with the mapping from the 3D point coordinates to the image coordinates is
copied. If none of these attributes is available, an exception is raised.


’score’: This value specifies that the attribute with the scores and the attribute with the 3D point coordinates are
copied. Scores may be obtained from the operator reconstruct_surface_stereo.
’red’: This value specifies that the attribute containing the red color and the attribute with the 3D point coordinates
are copied.
’green’: This value specifies that the attribute containing the green color and the attribute with the 3D point coor-
dinates are copied.
’blue’: This value specifies that the attribute containing the blue color and the attribute with the 3D point coordi-
nates are copied.
’original_point_indices’: This value specifies that the attribute with the original point indices and the attribute
with the 3D point coordinates are copied. Original point indices may be obtained from the operator
triangulate_object_model_3d.
’all’: This value specifies that all available attributes are copied. That is, the attributes are the point coordinates,
the point normals, the face triangles, the face polygons, the mapping to image coordinates, the shape model
for matching, the parameter of a primitive, and the extended attributes.
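A minimal HDevelop sketch (placeholder handles) that keeps only the geometry attributes and discards everything else to save memory:

* Sketch: copy only point coordinates and normals.
copy_object_model_3d (ObjectModel3D, ['point_coord','point_normal'], CopiedObjectModel3D)
* The original model can be cleared if it is no longer needed.
clear_object_model_3d (ObjectModel3D)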

Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the input 3D object model.
. Attributes (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string / real / integer
Attributes to be copied.
Default: ’all’
List of values: Attributes ∈ {’point_coord’, ’point_normal’, ’triangles’, ’polygons’, ’xyz_mapping’,
’extended_attribute’, ’shape_based_matching_3d_data’, ’primitives_all’, ’primitive_plane’,
’primitive_sphere’, ’primitive_cylinder’, ’primitive_box’, ’surface_based_matching_data’,
’segmentation_data’, ’distance_computation_data’, ’score’, ’red’, ’green’, ’blue’, ’all’,
’original_point_indices’}
. CopiedObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the copied 3D object model.
Result
copy_object_model_3d returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an ex-
ception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d
Possible Successors
get_object_model_3d_params
See also
remove_object_model_3d_attrib, set_object_model_3d_attrib
Module
3D Metrology

deserialize_object_model_3d (
: : SerializedItemHandle : ObjectModel3D )

Deserialize a serialized 3D object model.


deserialize_object_model_3d deserializes a 3D object model that was serialized by
serialize_object_model_3d (see fwrite_serialized_item for an introduction of the basic
principle of serialization). The serialized 3D object model is defined by the handle SerializedItemHandle.
The deserialized values are stored in an automatically created 3D object model with the handle ObjectModel3D.
Parameters
. SerializedItemHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
. ObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model.
Result
If the parameters are valid, the operator deserialize_object_model_3d returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
write_object_model_3d, fread_serialized_item, receive_serialized_item,
serialize_object_model_3d
Possible Successors
affine_trans_object_model_3d, object_model_3d_to_xyz, prepare_object_model_3d
Alternatives
xyz_to_object_model_3d
See also
write_object_model_3d, clear_object_model_3d
Module
3D Metrology

gen_box_object_model_3d ( : : Pose, LengthX, LengthY,


LengthZ : ObjectModel3D )

Create a 3D object model that represents a box.


gen_box_object_model_3d creates a box-shaped 3D primitive, i.e., a 3D object model that represents a box.
The box is specified by a Pose and the side lengths LengthX, LengthY, and LengthZ along the respective
axis of the pose. The handle of the resulting 3D object model is returned by parameter ObjectModel3D.
Parameter Broadcasting
This operator supports parameter broadcasting. This means that each parameter can be given as a tuple of length 1
or N. Parameters with tuple length 1 will be repeated internally such that the number of created items is always N.
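A minimal HDevelop sketch that uses parameter broadcasting; the pose and side lengths are made-up values, and the single pose is assumed to broadcast over the two sets of side lengths:

* Sketch: one shared pose, two sets of side lengths -> two boxes.
create_pose (0, 0, 0, 0, 0, 0, 'Rp+T', 'gba', 'point', Pose)
gen_box_object_model_3d (Pose, [0.1,0.2], [0.05,0.05], [0.02,0.08], BoxObjectModels3D)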
Parameters
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
The pose that describes the position and orientation of the box. The pose has its origin in the center of the box.
. LengthX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
The length of the box along the x-axis.
. LengthY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
The length of the box along the y-axis.
Number of elements: LengthY == LengthX
. LengthZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
The length of the box along the z-axis.
Number of elements: LengthZ == LengthX
. ObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handle of the resulting 3D object model.


Result
gen_box_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an excep-
tion is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
smallest_bounding_box_object_model_3d
Possible Successors
get_object_model_3d_params, sample_object_model_3d, clear_object_model_3d
See also
gen_cylinder_object_model_3d, gen_sphere_object_model_3d,
gen_sphere_object_model_3d_center, gen_plane_object_model_3d
Module
3D Metrology

gen_cylinder_object_model_3d ( : : Pose, Radius, MinExtent,


MaxExtent : ObjectModel3D )

Create a 3D object model that represents a cylinder.


gen_cylinder_object_model_3d creates a cylinder-shaped 3D primitive, i.e., a 3D object model that
represents a cylinder. A cylinder is described by its center and the direction of its axis in Pose and by its radius in
Radius. The pose has the origin on the rotation axis of the cylinder and is oriented such that the z-axis is aligned
with the main direction of the cylinder. Additionally, the extensions of the cylinder are given by MinExtent and
MaxExtent. MinExtent and MaxExtent represent the z-coordinates of the lowest and highest points of the
cylinder on the rotation axis. The handle of the 3D object model is returned by the parameter ObjectModel3D.
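For example, the following hedged HDevelop sketch creates a cylinder of radius 0.02 extending from z = -0.05 to z = 0.05 along the axis of an identity pose (values are illustrative assumptions):

* Sketch: cylinder primitive along the z-axis of the given pose.
create_pose (0, 0, 0, 0, 0, 0, 'Rp+T', 'gba', 'point', Pose)
gen_cylinder_object_model_3d (Pose, 0.02, -0.05, 0.05, ObjectModel3D)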
Parameters
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
The pose that describes the position and orientation of the cylinder.
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
The radius of the cylinder.
. MinExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
Lowest z-coordinate of the cylinder in the direction of the rotation axis.
. MaxExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
Highest z-coordinate of the cylinder in the direction of the rotation axis.
Restriction: MinExtent < MaxExtent
. ObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handle of the resulting 3D object model.
Result
gen_cylinder_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an
exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.


This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
get_object_model_3d_params, sample_object_model_3d, clear_object_model_3d
See also
gen_sphere_object_model_3d, gen_sphere_object_model_3d_center,
gen_plane_object_model_3d, gen_box_object_model_3d
Module
3D Metrology

gen_empty_object_model_3d ( : : : EmptyObjectModel3D )

Create an empty 3D object model.


gen_empty_object_model_3d creates an empty 3D object model. The handle of the 3D object
model is returned by the parameter EmptyObjectModel3D. Attributes can be added using the operators
set_object_model_3d_attrib or set_object_model_3d_attrib_mod.
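A minimal sketch (placeholder variable name):

* Sketch: start with an empty 3D object model; attributes such as point
* coordinates can then be attached with set_object_model_3d_attrib or
* set_object_model_3d_attrib_mod.
gen_empty_object_model_3d (EmptyObjectModel3D)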
Parameters
. EmptyObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the new 3D object model.
Result
gen_empty_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an
exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
set_object_model_3d_attrib, set_object_model_3d_attrib_mod
See also
gen_box_object_model_3d, gen_cylinder_object_model_3d,
gen_sphere_object_model_3d, gen_sphere_object_model_3d_center,
gen_plane_object_model_3d
Module
3D Metrology

gen_object_model_3d_from_points ( : : X, Y, Z : ObjectModel3D )

Create a 3D object model that represents a point cloud from a set of 3D points.
gen_object_model_3d_from_points creates a 3D object model that represents a point cloud. The points
are described by x-, y-, and z-coordinates in the parameters X, Y, and Z.
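A minimal HDevelop sketch, using three made-up points (coordinates in meters):

* Sketch: build a small point cloud from explicit coordinates.
X := [0.0, 0.1, 0.1]
Y := [0.0, 0.0, 0.2]
Z := [0.0, 0.0, 0.0]
gen_object_model_3d_from_points (X, Y, Z, ObjectModel3D)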
Parameters
. X (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.x(-array) ; real
The x-coordinates of the points in the 3D point cloud.
. Y (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.y(-array) ; real
The y-coordinates of the points in the 3D point cloud.


. Z (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.z(-array) ; real


The z-coordinates of the points in the 3D point cloud.
. ObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the resulting 3D object model.
Result
gen_object_model_3d_from_points returns 2 (H_MSG_TRUE) if all parameters are correct. If neces-
sary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
get_object_model_3d_params
Possible Successors
connection_object_model_3d, convex_hull_object_model_3d
Alternatives
xyz_to_object_model_3d
See also
gen_box_object_model_3d, gen_sphere_object_model_3d,
gen_cylinder_object_model_3d
Module
3D Metrology

gen_plane_object_model_3d ( : : Pose, XExtent, YExtent : ObjectModel3D )

Create a 3D object model that represents a plane.


gen_plane_object_model_3d creates a planar 3D primitive, i.e., a 3D object model that represents a plane.
The plane is described by its center and rotation. The normal vector of the plane is aligned with the z-axis of the
rotated coordinate system. The center and the rotation are set with the parameter Pose. Additionally, the plane can
be limited by a polygon that is defined by points with the coordinates XExtent and YExtent. The handle of
the 3D object model is returned by the parameter ObjectModel3D.
Parameters
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
The center and the rotation of the plane.
Number of elements: Pose == 7
. XExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x(-array) ; real / integer
x coordinates specifying the extent of the plane.
. YExtent (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y(-array) ; real / integer
y coordinates specifying the extent of the plane.
Number of elements: XExtent == YExtent
. ObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the resulting 3D object model.
Result
gen_plane_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an
exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
get_object_model_3d_params, sample_object_model_3d, clear_object_model_3d
See also
gen_cylinder_object_model_3d, gen_sphere_object_model_3d,
gen_sphere_object_model_3d_center, gen_box_object_model_3d
Module
3D Metrology

gen_sphere_object_model_3d ( : : Pose, Radius : ObjectModel3D )

Create a 3D object model that represents a sphere.


gen_sphere_object_model_3d creates a sphere-shaped 3D primitive, i.e., a 3D object model that repre-
sents a sphere. A sphere is defined by its center given in Pose and its radius given in Radius. The handle of the
3D object model is returned by the parameter ObjectModel3D.
Parameter Broadcasting
This operator supports parameter broadcasting. This means that each parameter can be given as a tuple of length 1
or N. Parameters with tuple length 1 will be repeated internally such that the number of created items is always N.
Parameters
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
The pose that describes the position of the sphere.
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
The radius of the sphere.
. ObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handle of the resulting 3D object model.
Result
gen_sphere_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an
exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
smallest_sphere_object_model_3d
Possible Successors
get_object_model_3d_params, sample_object_model_3d, clear_object_model_3d
Alternatives
gen_sphere_object_model_3d_center
See also
gen_cylinder_object_model_3d, gen_plane_object_model_3d,
gen_box_object_model_3d
Module
3D Metrology


gen_sphere_object_model_3d_center ( : : X, Y, Z, Radius : ObjectModel3D )

Create a 3D object model that represents a sphere from x,y,z coordinates.


gen_sphere_object_model_3d_center creates a sphere-shaped 3D primitive, i.e., a 3D object model
that represents a sphere. A sphere is defined by its center given in X, Y, and Z, and its radius given in Radius.
The handle of the 3D object model is returned by the parameter ObjectModel3D.
Parameter Broadcasting
This operator supports parameter broadcasting. This means that each parameter can be given as a tuple of length 1
or N. Parameters with tuple length 1 will be repeated internally such that the number of created items is always N.
Parameters
. X (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.x(-array) ; real / integer
The x-coordinate of the center point of the sphere.
. Y (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.y(-array) ; real / integer
The y-coordinate of the center point of the sphere.
. Z (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.z(-array) ; real / integer
The z-coordinate of the center point of the sphere.
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
The radius of the sphere.
. ObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handle of the resulting 3D object model.
Result
gen_sphere_object_model_3d_center returns 2 (H_MSG_TRUE) if all parameters are correct. If nec-
essary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
smallest_sphere_object_model_3d
Possible Successors
get_object_model_3d_params, sample_object_model_3d, clear_object_model_3d
Alternatives
gen_sphere_object_model_3d
See also
gen_cylinder_object_model_3d, gen_plane_object_model_3d,
gen_box_object_model_3d
Module
3D Metrology

read_object_model_3d ( : : FileName, Scale, GenParamName,


GenParamValue : ObjectModel3D, Status )

Read a 3D object model from a file.


The operator read_object_model_3d reads a 3D object model from the file FileName and returns a 3D
object model handle in ObjectModel3D.
The operator supports the following file formats:

’om3’: HALCON format for 3D object model. Files with this format can be written by
write_object_model_3d. The default file extension for this format is ’om3’.
’dxf’: AUTOCAD format. HALCON supports only the ASCII version of the format. See below for details about
reading this file format. The default file extension for this format is ’dxf’.
’off’: Object File Format. This is a simple ASCII-based format that can hold 3D points and polygons. The binary
OFF format is not supported. The default file extension for this format is ’off’.
’ply’: Polygon File Format (also Stanford Triangle Format). This is a simple format that can hold 3D points,
point normals, polygons, color information and point-based extended attributes. HALCON supports both
the ASCII and the binary version of the format. If the file to be read contains unsupported information, the
additional data is ignored and only the supported data is read. If the name of a property entry of a ’ply’
file coincides with the name of a standard attribute (see set_object_model_3d_attrib), the property
will preferably be read into the standard attribute. The default file extension for this format is ’ply’.
’obj’: OBJ file format, also ’Wavefront OBJ-Format’. This is an ASCII-based format that can hold 3D points,
polygons, normals, texture coordinates, materials and other information. HALCON supports points (’v’-
lines), point normals (’vn’-lines) and polygonal faces (’f’-lines). Existing point normals are only returned
if there are exactly as many point normals as there are points. Other entities are ignored. The default file
extension for this format is ’obj’.
’stl’,
’stl_binary’,
’stl_ascii’: STL file format, also ’Stereolithography format’, ’SurfaceTesselationLanguage’, ’StandardTriangulationLanguage’,
and ’StandardTesselationLanguage’. This format stores triangles and triangle normals. However,
as triangle normals are not supported by HALCON 3D object models, only triangles are read while the
triangle normals are ignored. Normals are recomputed from the triangles if required. HALCON reads both
the ASCII and the binary version of this format. If ’stl’ is set, HALCON will auto-detect the type. Setting
the type to ’stl_binary’ or ’stl_ascii’ will enforce the corresponding format. The default file extension for this
format is ’stl’.
’step’: STEP file format, also STP or ’Standard for the Exchange of Product Model Data’. This is a complex
format that stores a large variety of geometrical definitions which allows an accurate storage of 3D models.
Due to the limited support for the geometrical structures defined by STEP in HALCON 3D object models,
triangulation is performed on these geometries, resulting in models comprised of triangle meshes. The default
file extensions for this format are ’step’ and ’stp’.
’generic_ascii’: This format can be used to read different ASCII files containing 3D data in tabular form, e.g.
’ptx’, ’pts’, ’xyz’ or ’pcd’. Currently, only point based attributes are supported, no triangles or polygons. The
information for each 3D point is expected to be written in a single line, one point at a time. The file format
must be further specified by setting the generic parameter ’ascii_format’.

When reading a DXF file, the output parameter Status contains information about the number of 3D faces that
were read and, if necessary, warnings that parts of the DXF file could not be interpreted.
The parameter Scale defines the scale of the file. For example, if the parameter is set to ’mm’, all units in the file
are assumed to have the unit ’mm’ and are transformed into the usual HALCON-internal unit ’m’ by multiplication
with 0.001. A value of ’100 mm’ thus becomes ’0.1 m’. Alternatively, a scaling factor can be passed to Scale,
which is multiplied with all coordinate values found in the file. The relation of units to scaling factors is given in
the following table:

Unit              Scaling factor
m                 1
dm                0.1
cm                0.01
mm                0.001
um, microns       1e-6
nm                1e-9
km                1000
in                0.0254
ft                0.3048
yd                0.9144


Note that the parameter Scale is ignored for files of type ’om3’ and ’step’. om3-files are always read without
any scale changes. For step-files, the unit is directly defined in the files, read along with the stored data and used
to scale to the HALCON-internal unit ’m’. For changing the scale manually after reading a 3D object model, use
affine_trans_object_model_3d.
A set of additional optional parameters can be set. The names and values of the parameters are passed in
GenParamName and GenParamValue, respectively. Some of the optional parameters can only be set for a
certain file type. The following values for GenParamName are possible:

’file_type’: Forces a file type. If this parameter is not set, the operator read_object_model_3d tries to auto-
detect the file type using the file ending and the file header. If the parameter is set, the given file is interpreted
as this file format.
List of values: ’om3’, ’dxf’, ’off’, ’ply’, ’obj’, ’stl’, ’step’, ’generic_ascii’.
’convert_to_triangles’: Convert all faces to triangles. If this parameter is set to ’true’, all faces read from the file
are converted to triangles.
Valid for formats: ’dxf’, ’ply’, ’off’, ’obj’.
List of values: ’true’, ’false’.
Default: ’false’.
’invert_normals’: Invert normals and face orientations. If this parameter is set to ’true’, the orientation of all
normals and faces is inverted.
Valid for formats: ’dxf’, ’ply’, ’off’, ’obj’, ’stl’, ’step’, ’generic_ascii’.
List of values: ’true’, ’false’.
Default: ’false’.
’max_approx_error’, ’min_num_points’: DXF-specific parameters (see below).
Valid for formats: ’dxf’.
’max_surface_deviation’: STEP-specific parameter.
Specifies the maximum allowed deviation (in ’m’) from the model surface during the triangulation. A smaller
value will generate a more accurate model but will also increase the reading time and the number of points and
triangles in the resulting model. Set the parameter to ’auto’ in order to estimate it automatically depending
on the size of the model.
Valid for formats: ’step’.
Suggested values: ’auto’, 0.0001, 0.00001.
Default: ’auto’.
Restriction: ’max_surface_deviation’ > 0
’split_level’: STEP-specific parameter.
STEP files can contain definitions of independent model components. With this parameter, each component
can be imported as a HALCON 3D object model. If the parameter is set to 0, the file is imported as a single
model. With 1 the model components are roughly separated from each other, while 2 separates the model
components at a more detailed level.
Valid for formats: ’step’.
List of values: 0, 1, 2.
Default: 0.
’ascii_format’: generic_ascii-specific parameter.
Specifies the format of the ASCII file to be read. As value, a dict containing information about the file content
must be provided. The dict defines the columns to be read and meta-data like the first line number containing
point information. Examples are given at the bottom of the operator reference or in the HDevelop example
read_object_model_3d_generic_ascii.hdev. The following parameters can be set as dict keys:
’columns’: (mandatory) Defines the column attributes in the read file, given as a tuple of
strings. All point-related standard and extended attributes as listed in the reference of
set_object_model_3d_attrib are supported. At least, ’point_coord_x’, ’point_coord_y’
and ’point_coord_z’ must be set. When setting normals, all three components ’point_normal_x’,
’point_normal_y’ and ’point_normal_z’ must be set. Ignoring columns is possible by setting ’’ (an empty
string) at the corresponding tuple position.
Suggested values: [’point_coord_x’, ’point_coord_y’, ’point_coord_z’], [’point_normal_x’,
’point_normal_y’, ’point_normal_z’], ’red’, ’green’, ’blue’, ’&my_custom_attrib’, ’’.

’separator’: (mandatory) Defines the separator between the columns. Currently, whitespace (blanks or
tabs) and semicolon are supported.
List of values: ’ ’, ’;’.
’first_point_line’: (optional) Describes the number of the first line to be read from the file and can e.g. be
used to skip header information. The top line in the file corresponds to ’first_point_line’ = 1.
Default: 1.
Restriction: ’first_point_line’ > 0
’last_point_line’: (optional) Describes the number of the last line to be read from the file and can e.g. be
used to skip unsupported information. The top line in the file corresponds to ’last_point_line’ = 1. When
’last_point_line’ is set to -1, all lines are read.
Default: -1.
Restriction: ’last_point_line’ >= -1
’comment’: (optional) Describes the start of comments in the read file. Information after the comment
marker is ignored when reading the file.
Suggested values: ’#’, ’*’, ’/’, ’comment’.
Valid for formats: ’generic_ascii’.
’xyz_map_width’: Creates a mapping for the read 3D object model that assigns an image coordinate to each read
3D point, as in xyz_to_object_model_3d. It is assumed that the read file contains the 3D points row-
wise. The passed value is used as width of the image. The height of the image is computed automatically.
If this parameter is set, the read 3D object model can be projected by object_model_3d_to_xyz using
the method ’from_xyz_map’. Only one of the two parameters ’xyz_map_width’ and ’xyz_map_height’ can be
set.
Valid for formats: ’ply’, ’off’, ’obj’, ’generic_ascii’.
Default: -1.
Restriction: ’xyz_map_width’ > 0
’xyz_map_height’: As ’xyz_map_width’, but assuming that the 3D points are aligned column-wise. The
width of the image is computed automatically. Only one of the two parameters ’xyz_map_width’ and
’xyz_map_height’ can be set.
Valid for formats: ’ply’, ’off’, ’obj’, ’generic_ascii’.
Default: -1.
Restriction: ’xyz_map_height’ > 0

Note that in many cases, it is recommended to use the 2D mapping data, if available, for speed
and robustness reasons. This is beneficial especially when using sample_object_model_3d,
surface_normals_object_model_3d, or when preparing a 3D object model for surface-based matching,
e.g., smoothing, removing outliers, and reducing the domain.
The operator read_object_model_3d supports the following DXF entities:

• POLYLINE
– Polyface meshes (Polyline flag 64)
– 3D Polylines (Polyline flag 8,9)
– 2D Polylines (Polyline flag 0)
• LWPOLYLINE
– 2D Polylines
• 3DFACE
• LINE
• CIRCLE
• ARC
• SOLID
• BLOCK
• INSERT


The two-dimensional DXF entities LINE, CIRCLE, and ARC are not interpreted as faces. Only if these
elements are extruded are the resulting faces inserted into the 3D object model. All elements that represent lines
rather than faces are added as 3D lines to the 3D object model.
The curved surface of extruded DXF entities of the type CIRCLE and ARC is approximated by planar faces.
The accuracy of this approximation can be controlled with the two generic parameters ’min_num_points’ and
’max_approx_error’. The parameter ’min_num_points’ defines the minimum number of sampling points that are
used for the approximation of the DXF element CIRCLE or ARC. Note that the parameter ’min_num_points’
always refers to the full circle, even for ARCs, i.e., if ’min_num_points’ is set to 50 and a DXF entity of the
type ARC is read that represents a semi-circle, this semi-circle is approximated by at least 25 sampling points.
The parameter ’max_approx_error’ defines the maximum deviation of the XLD contour from the ideal circle. The
determination of this deviation is carried out in the units used in the DXF file. For the determination of the accuracy
of the approximation both criteria are evaluated. Then, the criterion that leads to the more accurate approximation
is used.
Internally, the following default values are used for the generic parameters:

• ’min_num_points’ = 20
• ’max_approx_error’ = 0.25

To achieve a more accurate approximation, either the value for ’min_num_points’ must be increased or the value
for ’max_approx_error’ must be decreased.
One possible way to create a suitable DXF file is to create a 3D model of the object with the CAD program
AutoCAD. Ensure that the surface of the object is modeled, not only its edges. Lines that, e.g., define object
edges, will not be used by HALCON, because they do not define the surface of the object. Once the modeling is
completed, you can store the model in DWG format. To convert the DWG file into a DXF file that is suitable for
HALCON’s 3D matching, carry out the following steps:

• Export the 3D CAD model to a 3DS file using the 3dsout command of AutoCAD. This will triangulate the
object’s surface, i.e., the model will only consist of planes. (Users of AutoCAD 2007 or newer versions can
download this command utility from Autodesk’s web site.)
• Open a new empty sheet in AutoCAD.
• Import the 3DS file into this empty sheet with the 3dsin command of AutoCAD.
• Save the object into a DXF R12 file.

Users of other CAD programs should ensure that the surface of the 3D model is triangulated before it is exported
into the DXF file. If the CAD program is not able to carry out the triangulation, it is often possible to save the 3D
model in the proprietary format of the CAD program and to convert it into a suitable DXF file by using a CAD file
format converter that is able to perform the triangulation.
Parameters

. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
Filename of the file to be read.
Default: ’mvtec_bunny_normals’
Suggested values: FileName ∈ {’mvtec_bunny’, ’glass_mug’, ’bmc_mini’, ’pipe_joint’, ’clamp_sloped’,
’tile_spacer’, ’engine_part_bearing’}
File extension: .off, .ply, .dxf, .om3, .obj, .stl, .step, .stp
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; string / real / integer
Scale of the data in the file.
Default: ’m’
Suggested values: Scale ∈ {’m’, ’cm’, ’mm’, ’microns’, ’um’, ’nm’, ’km’, ’in’, ’ft’, ’yd’, 1.0, 0.01, 0.001,
1.0e-6, 0.0254, 0.3048, 0.9144}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’ascii_format’, ’convert_to_triangles’, ’invert_normals’, ’file_type’,
’min_num_points’, ’max_approx_error’, ’max_surface_deviation’, ’split_level’, ’xyz_map_width’,
’xyz_map_height’}

. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string / real / integer
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’, 1, 0, ’auto’, ’om3’, ’off’, ’ply’, ’dxf’, ’obj’, ’stl’,
’stl_binary’, ’stl_ascii’, ’step’, ’generic_ascii’}
. ObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handle of the 3D object model.
. Status (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Status information.
Example

* Example how to use file_type 'generic_ascii' and the generic parameter
* 'ascii_format' to read an ASCII point file.
FileFormat := dict{}
FileFormat.separator := ' '
FileFormat.columns := ['point_coord_x', 'point_coord_y', 'point_coord_z', \
                       'point_normal_x', 'point_normal_y', 'point_normal_z']
FileFormat.first_point_line := 14
FileFormat.last_point_line := 2273
FileFormat.comment := 'comment'
read_object_model_3d ('glass_mug.ply', 'm', ['file_type', 'ascii_format'], \
                      ['generic_ascii', FileFormat], ObjectModel3D, Status)

Result
The operator read_object_model_3d returns the value 2 (H_MSG_TRUE) if the given parameters are correct,
the file can be read, and the file is valid. If the file format is unknown or cannot be determined, the error 9512 is
raised. If the file is invalid, the error 9510 is raised. If necessary, an exception will be raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
write_object_model_3d
Possible Successors
affine_trans_object_model_3d, object_model_3d_to_xyz, prepare_object_model_3d
Alternatives
xyz_to_object_model_3d
See also
write_object_model_3d, clear_object_model_3d
Module
3D Metrology

remove_object_model_3d_attrib ( : : ObjectModel3D,
Attributes : ObjectModel3DOut )

Remove attributes of a 3D object model.


remove_object_model_3d_attrib copies the 3D object model ObjectModel3D and removes within
this copy the standard and/or extended attributes given in Attributes. The new 3D object model is returned in
ObjectModel3DOut. Doing so does not modify the 3D object model ObjectModel3D; instead, a new model is
created. This is in contrast to the operator remove_object_model_3d_attrib_mod, which modifies the
input model but functions identically otherwise.
If the Attributes do not exist in ObjectModel3D, no exception is raised.


Standard attributes
The following values for the parameter Attributes are possible:

’point_normal’: This value specifies that the attribute with the 3D point normals and the attribute with the 3D
point coordinates are removed.
’triangles’: This value specifies that the attribute with the face triangles is removed.
’polygons’: This value specifies that the attribute with the face polygon is removed.
’lines’: This value specifies that the attribute with the lines is removed.
’xyz_mapping’: This value specifies that the attribute with the mapping to image coordinates is removed.
’extended_attribute’: This value specifies that all user-defined extended attributes are removed.
’primitives_all’: This value specifies that the attribute with the parameters of the primitive (including an empty
primitive) is removed (e.g., obtained from the operator fit_primitives_object_model_3d).
’primitive_plane’: This value specifies that the attribute with the primitive plane is removed (e.g., obtained from
the operator fit_primitives_object_model_3d).
’primitive_sphere’: This value specifies that the attribute with the primitive sphere is removed (e.g., obtained from
the operator fit_primitives_object_model_3d).
’primitive_cylinder’: This value specifies that the attribute with the primitive cylinder is removed (e.g., obtained
from the operator fit_primitives_object_model_3d).
’primitive_box’: This value specifies that the attribute with the primitive box is removed.
’shape_based_matching_3d_data’: This value specifies that the attribute with the prepared shape model for shape-
based 3D matching is removed.
’distance_computation_data’: This value specifies that the attribute with the distance computation data structure
is removed. The distance computation data can be created with prepare_object_model_3d, and can
be used with distance_object_model_3d.
’all’: This value specifies that all available attributes are removed except for the point coordinates. That is, the
attributes are the point normals, the face triangles, the face polygons, the mapping to image coordinates, the
shape model for matching, the parameter of a primitive, and the extended attributes.

Extended attributes
Extended attributes are attributes that can be derived from standard attributes by special operators (e.g.,
distance_object_model_3d), or user-defined attributes (set with set_object_model_3d_attrib
or set_object_model_3d_attrib_mod). The extended attributes can be removed by setting their names
in Attributes.
The following predefined extended attributes can be removed:

’original_point_indices’: This value specifies that the attribute with the original point indices is removed. Original
point indices may be obtained from the operator triangulate_object_model_3d.
’score’: This value specifies that the attribute with the scores is removed. Scores may be obtained from the operator
reconstruct_surface_stereo.
’red’: This value specifies that the attribute containing the red color is removed.
’green’: This value specifies that the attribute containing the green color is removed.
’blue’: This value specifies that the attribute containing the blue color is removed.
’edge_dir_x’: This value specifies that the vector for the X axis is removed.
’edge_dir_y’: This value specifies that the vector for the Y axis is removed.
’edge_dir_z’: This value specifies that the vector for the Z axis is removed.
’edge_amplitude’: This value specifies that the vector for the amplitude is removed.

Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the input 3D object model.
. Attributes (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Name of the attributes to be removed.
Default: ’extended_attribute’
List of values: Attributes ∈ {’point_normal’, ’triangles’, ’lines’, ’polygons’, ’xyz_mapping’,
’shape_based_matching_3d_data’, ’distance_computation_data’, ’primitives_all’, ’primitive_plane’,
’primitive_sphere’, ’primitive_cylinder’, ’primitive_box’, ’extended_attribute’, ’score’, ’red’, ’green’, ’blue’,
’original_point_indices’, ’all’}
. ObjectModel3DOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the resulting 3D object model.
Result
If the parameters are valid, the operator remove_object_model_3d_attrib returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
set_object_model_3d_attrib
Possible Successors
get_object_model_3d_params
Alternatives
remove_object_model_3d_attrib_mod
See also
copy_object_model_3d, set_object_model_3d_attrib
Module
3D Metrology

remove_object_model_3d_attrib_mod ( : : ObjectModel3D,
Attributes : )

Remove attributes of a 3D object model.


remove_object_model_3d_attrib_mod removes the standard and/or extended attributes given in
Attributes of a 3D object model ObjectModel3D. Doing so changes the 3D object model. This is in contrast
to the operator remove_object_model_3d_attrib, which creates a new model but functions identically
otherwise.
If the Attributes do not exist in ObjectModel3D, no exception is raised.
For a detailed description of Attributes see operator remove_object_model_3d_attrib.
Attention
remove_object_model_3d_attrib_mod removes Attributes unchecked from the 3D object model.
Special attention must be paid to retain a consistent 3D object model, as most of the operators expect consistent 3D
object models. Furthermore, the mapping of the 3D points to image coordinates should not be removed because it
speeds up the computation of many operators.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the input 3D object model.


. Attributes (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Name of the attributes to be removed.
Default: ’extended_attribute’
List of values: Attributes ∈ {’point_normal’, ’triangles’, ’lines’, ’polygons’, ’xyz_mapping’,
’shape_based_matching_3d_data’, ’distance_computation_data’, ’primitives_all’, ’primitive_plane’,
’primitive_sphere’, ’primitive_cylinder’, ’primitive_box’, ’extended_attribute’, ’score’, ’red’, ’green’, ’blue’,
’original_point_indices’, ’all’}
Result
If the parameters are valid, the operator remove_object_model_3d_attrib_mod returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
set_object_model_3d_attrib_mod
Possible Successors
get_object_model_3d_params
Alternatives
remove_object_model_3d_attrib
See also
copy_object_model_3d, set_object_model_3d_attrib_mod
Module
3D Metrology

serialize_object_model_3d ( : : ObjectModel3D : SerializedItemHandle )

Serialize a 3D object model.


serialize_object_model_3d serializes the data of a 3D object model (see
fwrite_serialized_item for an introduction of the basic principle of serialization). The same data
that is written in a file using the file format ’om3’ of write_object_model_3d is converted to a serialized
item. The 3D object model is defined by the handle ObjectModel3D. The serialized 3D object model is returned
by the handle SerializedItemHandle and can be deserialized by deserialize_object_model_3d.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model.
. SerializedItemHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
Result
If the parameters are valid, the operator serialize_object_model_3d returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d
Possible Successors
read_object_model_3d, fwrite_serialized_item, send_serialized_item,
deserialize_object_model_3d
See also
read_object_model_3d
Module
3D Metrology

set_object_model_3d_attrib ( : : ObjectModel3D, AttribName, AttachExtAttribTo, AttribValues : ObjectModel3DOut )

Set attributes of a 3D object model.


set_object_model_3d_attrib sets the standard attributes or the extended attributes given in
AttribName of a 3D object model ObjectModel3D to the values in AttribValues and returns a 3D
object model with the new attribute values in ObjectModel3DOut. set_object_model_3d_attrib is
identical to set_object_model_3d_attrib_mod with the exception that it creates a new 3D object model
and leaves the original 3D object model unchanged. It is possible to attach the values of extended attributes to
already existing standard attributes of the 3D object model by setting the parameter AttachExtAttribTo. For
standard attributes, AttachExtAttribTo is ignored.
If the attributes in AttribName do not exist, they are created if possible. If already existing attributes are set, the
length of AttribValues must match the existing attribute values. In this case the existing attribute values are
replaced. If extended attributes are attached to already existing standard attributes with AttachExtAttribTo,
the length of AttribValues must match the existing attribute values.
Standard attributes
The following standard attributes can be set:

’point_coord_x’: The x-coordinates of the 3D points are set with AttribValues. If the attribute does not exist,
the x-, y- and z-coordinates must be set with ’point_coord_x’, ’point_coord_y’, and ’point_coord_z’ at once.
The number of x-, y-, and z-coordinates must be identical.
’point_coord_y’: The y-coordinates of the 3D points are set with AttribValues. If the attribute does not exist,
the x-, y- and z-coordinates must be set with ’point_coord_x’, ’point_coord_y’, and ’point_coord_z’ at once.
The number of x-, y-, and z-coordinates must be identical.
’point_coord_z’: The z-coordinates of the 3D points are set with AttribValues. If the attribute does not exist,
the x-, y- and z-coordinates must be set with ’point_coord_x’, ’point_coord_y’, and ’point_coord_z’ at once.
The number of x-, y-, and z-coordinates must be identical.
’point_normal_x’: The x-components of the 3D point normals of the 3D points are set with AttribValues.
If the attribute does not exist, the x-, y- and z-components of 3D point normals must be set with
’point_normal_x’, ’point_normal_y’, and ’point_normal_z’ at once. The number of x-, y-, and z-components
must be identical to the number of 3D points. Note that the given 3D point normals will not be normalized to
a length of 1.
’point_normal_y’: The y-components of the 3D point normals of the 3D points are set with AttribValues.
If the attribute does not exist, the x-, y- and z-components of 3D point normals must be set with
’point_normal_x’, ’point_normal_y’, and ’point_normal_z’ at once. The number of x-, y-, and z-components
must be identical to the number of 3D points. Note that the given 3D point normals will not be normalized to
a length of 1.
’point_normal_z’: The z-components of the 3D point normals of the 3D points are set with AttribValues.
If the attribute does not exist, the x-, y- and z-components of 3D point normals must be set with
’point_normal_x’, ’point_normal_y’, and ’point_normal_z’ at once. The number of x-, y-, and z-components
must be identical to the number of 3D points. Note that the given 3D point normals will not be normalized to
a length of 1.


’triangles’: The indices of the 3D points that represent triangles are set with AttribValues in the following
order: The first three values of AttribValues (input values 0,1,2) represent the first triangle and contain
the indices of the corresponding 3D points of the triangle corners. The second three values (input values
3,4,5) represent the second triangle etc. The direction of the triangles results from the order of the point
indices.
’polygons’: The indices of the 3D points that represent polygons are set with AttribValues in the following
order: The first value of AttribValues contains the number n of points of the first polygon. The following
values (input values 1,2,..,n) contains the indices of the points of the first polygon. The next value (input
value n+1) contains the number m of the points of the second polygon. The following m values (input values
n+2,n+3,..,n+1+m) contain the indices of the points of the second polygon etc.
’lines’: The indices of the 3D points that represent polylines are set with AttribValues in the following order:
The first value of AttribValues contains the number n of points of the first polyline. The following
values (input values 1,2,..,n) represent the indices of the points of the first polyline. The next value (input
value n+1) contains the number m of points of the second polyline. The following m values (input values
n+2,n+3,..,n+1+m) represent the indices of the points of the second polyline etc. All indices correspond to
already existing 3D points.
’xyz_mapping’: The mapping of 3D points to image coordinates is set with AttribValues in the following
order: The first two values of AttribValues (input value 0 and 1) contain the width and height of the
respective image. The following n values (input values 2,3,..,n+1, with n being the number of 3D points)
represent the row coordinates of the n points given in image coordinates. The next n input values (input
values n+2,n+3,..,n*2+1) represent the column coordinates of the n points in image coordinates. Hence, the
total number of input values is n*2+2.

Extended attributes
Extended attributes are attributes that can be derived from standard attributes by special operators (e.g.,
distance_object_model_3d), or user-defined attributes. Predefined extended attributes can only be set
separately, for these attributes AttachExtAttribTo will be ignored. The names of user-defined extended
attributes are arbitrary, but must start with the prefix ’&’, e.g., ’&my_attrib’. Extended attributes can have an
arbitrary number of floating point values.
The following predefined extended attributes can be set:

’original_point_indices’: The original point indices of the 3D points are set with AttribValues. The number
of the original point indices must be identical to the number of 3D points.
’score’: The scores of a 3D reconstruction of the 3D points are set with AttribValues. Since the score is
evaluated separately for each 3D point, the number of score components must be identical to the number
of 3D points.
’red’: The red channel intensities of the 3D points are set with AttribValues. The number of color values
must be identical to the number of 3D points.
’green’: The green channel intensities of the 3D points are set with AttribValues. The number of color values
must be identical to the number of 3D points.
’blue’: The blue channel intensities of the 3D points are set with AttribValues. The number of color values
must be identical to the number of 3D points.
’edge_dir_x’: The x-component of a vector that is perpendicular to the edge direction and the viewing direction.
’edge_dir_y’: The y-component of a vector that is perpendicular to the edge direction and the viewing direction.
’edge_dir_z’: The z-component of a vector that is perpendicular to the edge direction and the viewing direction.
’edge_amplitude’: Contains the amplitude of edge points.

Extended attributes can be attached to already existing standard attributes of the 3D object model by setting the
parameter AttachExtAttribTo. The following values of AttachExtAttribTo are possible:

’object’ or []: If this value is set, the extended attribute specified in AttribName is associated to the 3D object
model as a whole. The number of values specified in AttribValues is not restricted.
’points’: If this value is set, the extended attribute specified in AttribName is associated to the 3D points of
the object model. The number of values specified in AttribValues must be the same as the number of
already existing 3D points.

’triangles’: If this value is set, the extended attribute specified in AttribName is associated to the triangles of
the object model. The number of values specified in AttribValues must be the same as the number of
already existing triangles.
’polygons’: If this value is set, the extended attribute specified in AttribName is associated to the polygons of
the object model. The number of values specified in AttribValues must be the same as the number of
already existing polygons.
’lines’: If this value is set, the extended attribute specified in AttribName is associated to the lines of the object
model. The number of values specified in AttribValues must be the same as the number of already
existing lines.

Attention
If multiple attributes are given in AttribName, AttribValues is divided into sub-tuples of equal length.
Each sub-tuple is then assigned to one attribute. E.g., if AttribName and AttribValues are set to
AttribName := [’&attrib1’,’&attrib2’,’&attrib3’],
AttribValues := [0.0,1.0,2.0,3.0,4.0,5.0],
the following values are assigned to the individual attributes:
’&attrib1’ = [0.0,1.0], ’&attrib2’ = [2.0,3.0], ’&attrib3’ = [4.0,5.0].
Consequently, it is not possible to set multiple attributes of different lengths in one call.
set_object_model_3d_attrib stores the input AttribValues unmodified in the 3D object model.
Therefore, special attention must be paid to the consistency of the input data, as most of the operators expect
consistent 3D object models.
Parameters

. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the input 3D object model.
. AttribName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Name of the attributes.
List of values: AttribName ∈ {’point_coord_x’, ’point_coord_y’, ’point_coord_z’, ’point_normal_x’,
’point_normal_y’, ’point_normal_z’, ’triangles’, ’polygons’, ’lines’, ’xyz_mapping’, ’red’, ’green’, ’blue’,
’score’, ’original_point_indices’}
. AttachExtAttribTo (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Defines where extended attributes are attached to.
Default: []
List of values: AttachExtAttribTo ∈ {[], ’object’, ’points’, ’polygons’, ’triangles’, ’lines’}
. AttribValues (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer
Attribute values.
. ObjectModel3DOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the resulting 3D object model.
Result
If the parameters are valid, the operator set_object_model_3d_attrib returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
gen_empty_object_model_3d
Possible Successors
get_object_model_3d_params
Alternatives
set_object_model_3d_attrib_mod


See also
copy_object_model_3d, remove_object_model_3d_attrib
Module
3D Metrology

set_object_model_3d_attrib_mod ( : : ObjectModel3D, AttribName, AttachExtAttribTo, AttribValues : )

Set attributes of a 3D object model.


set_object_model_3d_attrib_mod sets the standard attributes or the extended attributes given
in AttribName of a 3D object model ObjectModel3D to the values in AttribValues.
set_object_model_3d_attrib_mod is identical to set_object_model_3d_attrib, with the ex-
ception that it does not create a new 3D object model but modifies the given one. It is possible to attach the
values of extended attributes to already existing standard attributes of the 3D object model by setting the parameter
AttachExtAttribTo. For standard attributes, AttachExtAttribTo is ignored.
If the attributes in AttribName do not exist, they are created if possible. If already existing attributes are set, the
length of AttribValues must match the existing attribute values. In this case the existing attribute values are
replaced. If extended attributes are attached to already existing standard attributes with AttachExtAttribTo,
the length of AttribValues must match the existing attribute values.
For a detailed description see operator set_object_model_3d_attrib.
Attention
If multiple attributes are given in AttribName, AttribValues is divided into sub-tuples of equal length.
Each sub-tuple is then assigned to one attribute. E.g., if AttribName and AttribValues are set to
AttribName := [’&attrib1’,’&attrib2’,’&attrib3’],
AttribValues := [0.0,1.0,2.0,3.0,4.0,5.0],
the following values are assigned to the individual attributes:
’&attrib1’ = [0.0,1.0], ’&attrib2’ = [2.0,3.0], ’&attrib3’ = [4.0,5.0].
Consequently, it is not possible to set multiple attributes of different lengths in one call.
set_object_model_3d_attrib_mod modifies the content of an already existing 3D object model. The
operator stores the input AttribValues unmodified in the 3D object model. Therefore, special attention must
be paid to the consistency of the input data, as most of the operators expect consistent 3D object models.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model.
. AttribName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Name of the attributes.
List of values: AttribName ∈ {’point_coord_x’, ’point_coord_y’, ’point_coord_z’, ’point_normal_x’,
’point_normal_y’, ’point_normal_z’, ’triangles’, ’polygons’, ’lines’, ’xyz_mapping’, ’red’, ’green’, ’blue’,
’score’, ’original_point_indices’}
. AttachExtAttribTo (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Defines where extended attributes are attached to.
Default: []
List of values: AttachExtAttribTo ∈ {[], ’object’, ’points’, ’polygons’, ’triangles’, ’lines’}
. AttribValues (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer
Attribute values.
Result
If the parameters are valid, the operator set_object_model_3d_attrib_mod returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).

• Multithreading scope: global (may be called from any thread).


• Processed without parallelization.
Possible Predecessors
gen_empty_object_model_3d
Possible Successors
get_object_model_3d_params
Alternatives
set_object_model_3d_attrib
See also
copy_object_model_3d, remove_object_model_3d_attrib_mod
Module
3D Metrology

union_object_model_3d ( : : ObjectModels3D, Method : UnionObjectModel3D )

Combine several 3D object models to a new 3D object model.


union_object_model_3d combines the data of all input models in ObjectModels3D to a new 3D object
model that is returned in UnionObjectModel3D.
If the input 3D object models overlap, the 2D mapping, polygons, or triangles in the output may overlap as well
and may therefore be less useful.
So far, the only supported Method is ’points_surface’, which combines all points, surfaces, and lines into the
output UnionObjectModel3D. Extended attributes are copied only if no holes would appear, i.e., if they are present
in all input object models in which the standard attribute they are attached to exists.
Attention
union_object_model_3d ignores 3D object models of type 3D primitive and 3D shape model.
Parameters
. ObjectModels3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handle of input 3D object models.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Method used for the union.
Default: ’points_surface’
List of values: Method ∈ {’points_surface’}
. UnionObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the resulting 3D object model.
Example

gen_object_model_3d_from_points ([0,0,0,0], [1,1,0,0], [0,1,1,0], \
                                 ObjectModel3D1)
gen_object_model_3d_from_points ([1,1,1,1], [1,1,0,0], [0,1,1,0], \
                                 ObjectModel3D2)
get_object_model_3d_params (ObjectModel3D1, 'diameter', DiameterOld)
union_object_model_3d ([ObjectModel3D1,ObjectModel3D2], 'points_surface', \
                       UnionObjectModel3D)
get_object_model_3d_params (UnionObjectModel3D, 'diameter', DiameterNew)

Result
union_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If there is no attribute
common in all input objects, an exception is raised.
Execution Information


• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.

Possible Predecessors
get_object_model_3d_params
Possible Successors
connection_object_model_3d, convex_hull_object_model_3d
See also
gen_box_object_model_3d, gen_sphere_object_model_3d,
gen_cylinder_object_model_3d
Module
3D Metrology

write_object_model_3d ( : : ObjectModel3D, FileType, FileName, GenParamName, GenParamValue : )

Writes a 3D object model to a file.


The operator write_object_model_3d writes the 3D object model ObjectModel3D to the file
FileName. The object model can be read again with read_object_model_3d, or can be imported into
an appropriate CAD program. Please note that primitives may only be stored in the HALCON format ’om3’.
Should it be necessary to store the primitives in another format, the operator sample_object_model_3d has
to be called beforehand. However, this results in a transformation of the primitives into 3D points and therefore
only corresponds to an approximation of the primitives.
All coordinates are written in meters. If the file is read later using read_object_model_3d, the parameter
Scale must be set to ’m’ to avoid scaling the data.
The parameter FileType determines the type of the file. The following types are supported by this operator:

’om3’: HALCON format for object model 3D. Files with this format can be read by read_object_model_3d.
The default file extension for this format is ’om3’.
’dxf’: AUTOCAD format. See read_object_model_3d for details about reading this file format. The default
file extension for this format is ’dxf’.
’off’: Object File Format. This is an ASCII-based format that can hold 3D points and polygons. The default file
extension for this format is ’off’.
’ply’,
’ply_binary’: Polygon File Format (also Stanford Triangle Format). This is a simple format that can hold 3D
points, point normals, polygons, color information and point-based extended attributes. HALCON supports
the writing of both the ASCII and the binary version of this format. The default file extension for this format
is ’ply’.
’obj’: OBJ file format, also Wavefront OBJ-Format. This is an ASCII-based format that can hold 3D points,
polygons, normals, and triangles, which are stored as polygons. The default file extension for this format is
’obj’.
’stl’,
’stl_binary’,
’stl_ascii’: STL file format, also ’Stereolithography format’, ’SurfaceTesselationLanguage’, ’StandardTriangulationLanguage’,
and ’StandardTesselationLanguage’. This format stores triangles and triangle normals. However,
as triangle normals are not supported by HALCON 3D object models and point normals (which are, for
example, calculated by surface_normals_object_model_3d) are not supported by the STL format,
no normals are written to file. If the 3D object model contains polygons, they are converted to triangles
before writing them to disc. If the file type is set to ’stl’ or ’stl_binary’, the binary version of STL is written
while ’stl_ascii’ selects the ASCII version. The default file extension for this format is ’stl’.

A set of additional optional parameters can be set. The names and values of the parameters are passed in
GenParamName and GenParamValue, respectively. Some of the optional parameters can only be set for a
certain file type. The following values for GenParamName are possible:

’invert_normals’: Invert normals and face orientation before saving the 3D object model. If this value is set to
’true’, for the formats ’off’, ’ply’, ’obj’, and ’stl’ the orientation of faces (triangles and polygons) is inverted.
For formats that support point normals (’ply’, ’obj’), all normals are inverted before writing them to disc.
Note that for the types ’om3’ and ’dxf’ the parameter has no effect.
Valid for formats: ’off’, ’ply’, ’obj’, ’stl’.
List of values: ’true’, ’false’.
Default: ’false’.

Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model.
. FileType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the file that is written.
Default: ’om3’
List of values: FileType ∈ {’off’, ’ply’, ’ply_binary’, ’dxf’, ’om3’, ’obj’, ’stl’, ’stl_binary’, ’stl_ascii’}
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
Name of the file that is written.
File extension: .off, .ply, .dxf, .om3, .obj, .stl
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’invert_normals’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string / real / integer
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
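
A minimal usage sketch (not part of the original example set; the file name is hypothetical): write a triangulated
3D object model as binary PLY with inverted normals and face orientation.

write_object_model_3d (ObjectModel3D, 'ply_binary', 'part_inverted.ply', \
                       'invert_normals', 'true')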
Result
The operator write_object_model_3d returns the value 2 (H_MSG_TRUE) if the given parameters are cor-
rect and the file can be written. If necessary, an exception will be raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d
Possible Successors
read_object_model_3d
See also
read_object_model_3d
Module
3D Metrology

4.2 Features

area_object_model_3d ( : : ObjectModel3D : Area )

Calculate the area of all faces of a 3D object model.


area_object_model_3d calculates the area of all faces in a 3D object model. The 3D object model requires
faces or triangles. The resulting area is returned in Area.

Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model.
. Area (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
Calculated area.
Number of elements: Area == ObjectModel3D
Example

gen_box_object_model_3d ([0,0,0,0,0,0,0],3,2,1, ObjectModel3D)
convex_hull_object_model_3d (ObjectModel3D, ObjectModel3DConvexHull)
area_object_model_3d (ObjectModel3DConvexHull, Area)

Result
area_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception
is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
connection_object_model_3d, select_points_object_model_3d,
prepare_object_model_3d, convex_hull_object_model_3d
Possible Successors
select_object_model_3d
See also
volume_object_model_3d_relative_to_plane, max_diameter_object_model_3d,
moments_object_model_3d
Module
3D Metrology

distance_object_model_3d ( : : ObjectModel3DFrom, ObjectModel3DTo,
Pose, MaxDistance, GenParamName, GenParamValue : )

Compute the distances of the points of one 3D object model to another 3D object model.
The operator distance_object_model_3d computes the distances of the points in the 3D object
model ObjectModel3DFrom to the points, triangles, polygons, or primitive in the 3D object model
ObjectModel3DTo. The distances are stored as an extended attribute named ’&distance’ in the 3D object model
ObjectModel3DFrom. This attribute can subsequently be queried with get_object_model_3d_params
or be processed with select_points_object_model_3d or other operators that use extended attributes.
The target data (points, triangles, polygons, or primitive) is selected based on the attributes contained in
ObjectModel3DTo. It is selected based on the presence of the data in the following precedence: Primitive,
triangles, polygons, and points. As an alternative to this automatic target data selection, the target data type can also
be set with the generic parameter ’distance_to’ (see below). Generic, non-triangular polygons are internally trian-
gulated by the operator before the distance to the resulting triangles is calculated. Thus, calling the operator with
triangulated objects is faster than calling it with objects having different polygon faces.
MaxDistance can be used to limit the range of the distance values to be computed. If MaxDistance is set
to 0, all distances are computed. If MaxDistance is set to another value, points whose distance would exceed
MaxDistance are not processed further and their distance is set to MaxDistance. Thus, setting MaxDistance
to a value different from 0 can significantly speed up the execution of this operator.
If Pose is a non-empty tuple, it must contain a pose which is applied to the points in ObjectModel3DFrom
before computing the distances. The pose can be inverted using the generic parameter ’invert_pose’ (see below).

Depending on the target data type (points, triangles, or primitive), several methods for computing the dis-
tances are available. Some of these methods compute a data structure on the elements of ObjectModel3DTo
to speed up the distance computation. Those data structures can be precomputed using the operator
prepare_object_model_3d. This allows multiple calls to distance_object_model_3d to re-use the
data structure, thus saving the time to re-compute it for each call. For objects with non-triangular polygon faces,
the operator prepare_object_model_3d can additionally perform the triangulation and save it to the object
to further speed up the distance_object_model_3d operator. This triangulation is only performed when
the generic parameter ’distance_to’ is set to ’triangles’. Note that this triangulation, contrary to that of the operator
triangulate_object_model_3d, does not clear out the polygons attribute.
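
A sketch of this precomputation workflow (variable names are illustrative; the call assumes the signature
prepare_object_model_3d (ObjectModel3D, Purpose, OverwriteData, GenParamName, GenParamValue)):

* Precompute the distance data structure (and, if needed, the triangulation) once.
prepare_object_model_3d (ObjectModel3DTo, 'distance_computation', 'true', \
                         'distance_to', 'triangles')
* Reuse the precomputed structure for several distance computations.
for Index := 0 to |Scene3DTuple| - 1 by 1
    distance_object_model_3d (Scene3DTuple[Index], ObjectModel3DTo, [], 0, [], [])
endfor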
When computing the distance to points or to triangles, the operator can optionally return the index of the closest
point or triangle for each point in ObjectModel3DFrom by setting the generic parameter ’store_closest_index’
to ’true’ (see below). The index is stored as extended attribute named ’&closest_index’ in the 3D object model
ObjectModel3DFrom. Note that the closest index cannot be computed when using the ’voxel’ method. If a
point’s distance to its closest element exceeds the maximum distance set in MaxDistance, the closest index is
set to -1.
Optionally, signed distances to points, triangles, or to a primitive can be calculated. For this, the generic parameter
’signed_distances’ has to be set to ’true’. Note that signed distances cannot be computed when using the ’voxel’
method in combination with point-to-point distances.
In the following, the different target types and methods are explained, and their advantages and disadvantages are
described. Note that the operator automatically selects a default method depending on the target data type. This
method can be overridden using the generic parameter ’method’.

Distance to points: The following methods are available to compute the distances from points to points:
Linear search: For each point in ObjectModel3DFrom, the distances to all points in
ObjectModel3DTo are computed, and the smallest distance is used. This method requires no
precomputed data structure, and is the fastest for a small number of points in ObjectModel3DTo.
KD-Tree: The points in ObjectModel3DTo are organized in a KD-Tree, which speeds up the search
for the closest point. The construction of the tree is very efficient. The search time is approximately
logarithmic to the number of points in ObjectModel3DTo. However, the search time is not constant,
and can vary significantly depending on the position of the query points in ObjectModel3DFrom.
Voxel: The points in ObjectModel3DTo are organized in a voxel structure. This voxel structure al-
lows searching in almost constant time, i.e., independent from the position of the query points in
ObjectModel3DFrom and the number of points in ObjectModel3DTo.
Note that the preparation of this data structure takes several seconds or minutes. However, it is possible
to perform a precomputation using prepare_object_model_3d on ObjectModel3DTo with
Purpose set to ’distance_computation’.
Distance to triangles: For computing the distances to triangles, the following methods are supported:
Linear search: For each point in ObjectModel3DFrom, the distances to all triangles in
ObjectModel3DTo are computed, and the smallest distance is used. This method requires no
precomputed data structure, and is the fastest for a small number of triangles in ObjectModel3DTo.
KD-Tree: The triangles in ObjectModel3DTo are organized in a KD-Tree, which speeds up the search
for the closest triangle. The construction of the tree is efficient. The search time is approximately log-
arithmic to the number of triangles in ObjectModel3DTo. However, the search time is not constant,
and can vary significantly depending on the position of the query points in ObjectModel3DFrom.
Voxel: The triangles in ObjectModel3DTo are organized in a voxel structure. This voxel structure
allows searching in almost constant time, i.e., independent from the position of the query points in
ObjectModel3DFrom and the number of triangles in ObjectModel3DTo.
Note that the preparation of this data structure takes several seconds or minutes. However, it is possible
to perform a precomputation using prepare_object_model_3d on ObjectModel3DTo with
Purpose set to ’distance_computation’. For creating the voxel data structure, the triangles are sampled.
The corresponding sampling distance can be set with the generic parameters ’sampling_dist_rel’ and
’sampling_dist_abs’.
By default, a relative sampling distance of 0.03 is used. See below for a more detailed description of
the sampling distance. Note that this data structure is only approximate. It is possible that some of the
distances are off by around 10% of the sampling distance. In these cases, the returned distances will
always be larger than the actual distances.

Distance to primitive: Since ObjectModel3DTo can contain only one primitive, the distances from the query
points to this primitive are computed linearly. The creation or usage of a data structure is not possible.
Note that computing the distance to primitive planes fitted with segment_object_model_3d or
fit_primitives_object_model_3d can be slow, since those planes contain a complex con-
vex hull of the points that were used to fit the plane. If only the distance to the plane is re-
quired, and the boundary should be ignored, it is recommended to obtain the plane pose using
get_object_model_3d_params with parameter ’primitive_parameter_pose’ and create a new plane
using gen_plane_object_model_3d.
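
A sketch of this workaround (variable names are hypothetical; passing empty extents to
gen_plane_object_model_3d is assumed to create a plane without an extent):

* Extract only the pose of the fitted plane primitive.
get_object_model_3d_params (FittedPlane3D, 'primitive_parameter_pose', PlanePose)
* Generate a plain plane primitive without the complex convex hull.
gen_plane_object_model_3d (PlanePose, [], [], SimplePlane3D)
* Distances to the plain plane ignore the boundary of the fitted plane.
distance_object_model_3d (Scene3D, SimplePlane3D, [], 0, [], [])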

The following table lists the different target data types, methods, and their properties. The search time is the approx-
imate time per point in ObjectModel3DFrom. N is the number of target elements in ObjectModel3DTo.

Target Data | Method  | Creation Time | Approx. Search Time | Properties
points      | linear  | 0             | O(N)                | No precomputation; fastest for small N; default for N < 100
points      | kd-tree | O(N log(N))   | O(log(N))           | Fast structure creation; non-constant search time; default for N ≥ 100
points      | voxel   | O(N log(N))   | O(log(log(N)))      | Slow structure creation; very fast search; default for precomputation with prepare_object_model_3d
triangles   | linear  | 0             | O(N)                | No precomputation; fastest for small N; default
triangles   | kd-tree | O(N log(N))   | O(log(N))           | Fast structure creation; non-constant search time
triangles   | voxel   | O(N log(N))   | O(log(log(N)))      | Slow structure creation; requires sampling distance; very fast search; small errors possible; default for precomputation with prepare_object_model_3d
primitive   | linear  | 0             | O(1)                | -

In addition to the parameters described above, the following parameters can be set to influence the distance
computation. If desired, these parameters and their corresponding values can be specified using GenParamName
and GenParamValue, respectively. All of the following parameters are optional.

’distance_to’ This parameter can be used to explicitly set the target data to which the distances are computed.

’auto’ (Default) Automatically set the target data. The following list of attributes is queried, and the first
appearing attribute from the list is used as target data: Primitive, Triangle, Point.
’primitive’ Compute the distance to the primitive contained in ObjectModel3DTo.
’triangles’ Compute the distance to the triangles contained in ObjectModel3DTo.
’points’ Compute the distance to the points contained in ObjectModel3DTo.
’method’ This parameter can be used to explicitly set the method to be used for the distance computation. Note
that not all methods are available for all target data types. For the list of possible pairs of target data type and
method, see above.
’auto’ (Default) Use the default method for the used target data type.
’linear’ Use a linear search for computing the distances.
’kd-tree’ Use a KD-Tree for computing the distances.
’voxel’ Use a voxel structure for computing the distances.
’invert_pose’ This parameter can be used to invert the pose given in Pose.
’false’ (Default) The pose is not inverted.
’true’ The pose is inverted.
’output_attribute’ This parameter can be used to set the name of the attribute in which the distances are stored.
By default, the distances are stored in an extended attribute named ’&distance’ in ObjectModel3DFrom.
However, if the same 3D object model is used for different calls of this operator, the result of the previous call
would be overwritten. This can be avoided by changing the name of the extended attribute. Valid extended
attribute names start with a ’&’.
’sampling_dist_rel’, ’sampling_dist_abs’ These parameters are used when computing the distances to triangles
using the voxel method. For this, the triangles need to be sampled. The sampling distance can be set either in
absolute terms, using ’sampling_dist_abs’, or relative to the diameter of the axis aligned bounding box, using
’sampling_dist_rel’. By default, ’sampling_dist_rel’ is set to 0.03. Only one of the two parameters can be set.
The diameter of the axis aligned bounding box can be queried using get_object_model_3d_params.
Note that the creation of the voxel data structure is very time consuming, and is usually performed offline
using prepare_object_model_3d (see above).
’store_closest_index’ This parameter can be used to return the index of the closest point or triangle in the extended
attribute ’&closest_index’.
’false’ (Default) The index is not returned.
’true’ The index is returned.
’signed_distances’ This parameter can be used to calculate signed distances of the points in the 3D object model
ObjectModel3DFrom to the points, triangles or primitive in the 3D object model ObjectModel3DTo.
’false’ (Default) Unsigned distances are returned.
’true’ Signed distances are returned.
Depending on the available target data (points, triangles, or primitive), the following particularities have to be
considered:
Distance to points: The computation of signed distances is only supported for the methods ’kd-tree’ and
’linear’. Furthermore, signed distances can only be calculated if point normals are available for the points
in the 3D object model or are attached via the operator set_object_model_3d_attrib_mod.
Distance to triangles: Signed distances can be calculated for all methods listed above. The operator returns
a negative distance, if the dot product with the normal vector of the triangle is less than zero. Otherwise,
the distance is positive.
Distance to primitive: When calculating signed distances to a cylindrical, spherical or box-shaped primi-
tive, the points of the 3D object model ObjectModel3DFrom inside the primitive obtain a negative
distance, whereas all others have a positive distance. When calculating signed distances to planes, all
points beneath the plane obtain a negative distance, whereas all others have a positive one.

Parameters
. ObjectModel3DFrom (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the source 3D object model.
. ObjectModel3DTo (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the target 3D object model.
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Pose of the source 3D object model in the target 3D object model.
Default: []
. MaxDistance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Maximum distance of interest.
Default: 0
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Names of the generic input parameters.
Default: []
List of values: GenParamName ∈ {’distance_to’, ’method’, ’invert_pose’, ’output_attribute’,
’sampling_dist_rel’, ’sampling_dist_abs’, ’signed_distances’, ’store_closest_index’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Values of the generic input parameters.
Default: []
List of values: GenParamValue ∈ {’auto’, ’triangles’, ’points’, ’polygons’, ’primitive’, ’kd-tree’, ’voxel’,
’linear’, ’true’, ’false’}
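
A minimal usage sketch (file names are hypothetical): compute signed point-to-triangle distances from a measured
point cloud to a triangulated reference model and query the results.

read_object_model_3d ('reference_part.om3', 'm', [], [], Reference3D, Status)
read_object_model_3d ('measured_part.om3', 'm', [], [], Measured3D, Status)
* Signed distances; additionally store the index of the closest triangle.
distance_object_model_3d (Measured3D, Reference3D, [], 0, \
                          ['signed_distances','store_closest_index'], \
                          ['true','true'])
* The results are attached to Measured3D as extended attributes.
get_object_model_3d_params (Measured3D, '&distance', Distances)
get_object_model_3d_params (Measured3D, '&closest_index', ClosestIndices)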
Result
distance_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an ex-
ception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
Possible Predecessors
prepare_object_model_3d, read_object_model_3d, find_surface_model,
xyz_to_object_model_3d
Possible Successors
get_object_model_3d_params, render_object_model_3d, disp_object_model_3d,
clear_object_model_3d
See also
prepare_object_model_3d
Module
3D Metrology

get_object_model_3d_params ( : : ObjectModel3D,
GenParamName : GenParamValue )

Return attributes of 3D object models.


A 3D object model consists of a set of attributes and meta data. The operator
get_object_model_3d_params provides access to the attributes and meta data of the given 3D object
models. The name of the requested attribute or meta data is passed in the generic parameter GenParamName;
the corresponding value is returned in GenParamValue. If a requested attribute or meta data is not available,
an exception is raised. get_object_model_3d_params can access several 3D object models
and several attributes at once. Note that the attributes or meta data can have different lengths. Some of the
standard attributes have a defined length, as noted in the attribute descriptions below. The length of other attributes
depends on the actual 3D object model and can be queried by setting the parameter GenParamName to, e.g.,
’num_points’, ’num_triangles’, ’num_polygons’, or ’num_lines’. Thus, to get the length of the standard attribute
’point_coord_x’, set GenParamName to ’num_points’.
Standard attributes
The following standard attributes and meta data can be accessed:

’point_coord_x’: The x-coordinates of the set of the 3D points (length can be queried by ’num_points’).
This attribute is obtained typically from the operator xyz_to_object_model_3d or
read_object_model_3d.
’point_coord_y’: The y-coordinates of the set of the 3D points (length can be queried by ’num_points’).
This attribute is obtained typically from the operator xyz_to_object_model_3d or
read_object_model_3d.
’point_coord_z’: The z-coordinates of the set of the 3D points (length can be queried by ’num_points’).
This attribute is obtained typically from the operator xyz_to_object_model_3d or
read_object_model_3d.
’point_normal_x’: The x-components of 3D point normals of the set of the 3D points (length can be queried by
’num_points’). This attribute is obtained typically from the operator smooth_object_model_3d.
’point_normal_y’: The y-components of 3D point normals of the set of the 3D points (length can be queried by
’num_points’). This attribute is obtained typically from the operator smooth_object_model_3d.
’point_normal_z’: The z-components of 3D point normals of the set of the 3D points (length can be queried by
’num_points’). This attribute is obtained typically from the operator smooth_object_model_3d.
’mapping_row’: The row-components of the 2D mapping of the set of 3D points. (length can be queried by
’num_points’, height of the original image can be queried by ’mapping_size’). This attribute is obtained
typically from the operator xyz_to_object_model_3d.
’mapping_col’: The column-components of the 2D mapping of the set of 3D points. (length can be queried by
’num_points’, width of the original image can be queried by ’mapping_size’). This attribute is obtained
typically from the operator xyz_to_object_model_3d.
’mapping_size’: The size of the original image. A tuple with the two entries width and height is returned.
’triangles’: The indices of the 3D points that represent triangles in the following order: The first three values
(return values 0,1,2) represent the first triangle. The next three values (return values 3,4,5) represent the
second triangle etc. All indices correspond to the coordinates of the 3D points. Access to the coordinates of
the 3D points is possible, e.g., with the generic parameter GenParamName set to the values ’point_coord_x’,
’point_coord_y’, and ’point_coord_z’, respectively. The length of this attribute corresponds to three times the
number of triangles, which can be queried using ’num_triangles’. This attribute is obtained typically from
the operator triangulate_object_model_3d or read_object_model_3d.
’polygons’: The indices of the 3D points that represent polygons in the following order: The first return value
contains the number n of the points of the first polygon. The following values (return values 1,2,..,n) represent
the indices of the points of the first polygon. The next value (return value n+1) contains the number m of the
points of the second polygon. The following m return values (return values n+2,n+3,..,n+1+m) represent the
indices of the points of the second polygon etc. All indices correspond to the coordinates of the 3D points.
Access to the coordinates of the 3D points is possible, e.g., with the generic parameter GenParamName set
to the values ’point_coord_x’, ’point_coord_y’, and ’point_coord_z’, respectively. The number of polygons
per 3D object model can be queried using ’num_polygons’. This attribute is obtained typically from the
operator read_object_model_3d.
’lines’: The indices of the 3D points that represent polylines in the following order: The first return value con-
tains the number n of points of the first polyline. The following values (return values 1,2,..,n) represent
the indices of the points of the first polyline. The next value (return value n+1) contains the number m of
points of the second polyline. The following m values (return values n+2,n+3,..,n+1+m) represent the in-
dices of the points of the second polyline etc. All indices correspond to the coordinates of the 3D points.
Access to the coordinates of the 3D points is possible, e.g., with the generic parameter GenParamName
set to the values ’point_coord_x’, ’point_coord_y’, and ’point_coord_z’, respectively. The number of lines
per 3D object model can be queried using ’num_lines’. This attribute is obtained typically from the operator
intersect_plane_object_model_3d.
’diameter_axis_aligned_bounding_box’: The diameter of the set of 3D points, defined as the length of the diagonal
of the smallest enclosing axis-parallel cuboid (see parameter ’bounding_box1’). This attribute has length 1.

’center’: 3D coordinates of the center of the set of 3D points. These coordinates are the center of the smallest
enclosing axis-parallel cuboid (see parameter ’bounding_box1’). This attribute has length 3. If there are no
3D coordinates in the 3D object model, the following rules apply:
If the 3D object model is a primitive of type cylinder (see gen_cylinder_object_model_3d) and
there are extensions, the center point between the extensions is returned. If there are no extensions, the
translation parameters of the pose are returned.
If the 3D object model is a primitive of type plane (see gen_plane_object_model_3d) and there are
extensions, the center of gravity of the plane is computed from the extensions. If there are no extensions, the
translation parameters of the pose are returned.
If the 3D object model is a primitive of type sphere or box (see gen_sphere_object_model_3d or
gen_box_object_model_3d), the center point of the object model is returned.
’primitive_type’: The primitive type (e.g., obtained from the operator
fit_primitives_object_model_3d). The return value of a sphere is ’sphere’. The return
value of a cylinder is ’cylinder’. The return value of a plane is ’plane’. The return value of a box is ’box’.
This attribute has length 1.
’primitive_parameter’: The parameters of the primitive (e.g., obtained from the operator
fit_primitives_object_model_3d). The length of this attribute depends on ’primitive_type’
and is between 4 and 10 for each 3D object model.
If the 3D object model is a primitive of type cylinder (see gen_cylinder_object_model_3d), the
return values are the (x-, y-, z-)coordinates of the center [x_center, y_center, z_center], the
normed (x-, y-, z-)directions of the main axis of the cylinder [x_axis, y_axis, z_axis], and the ra-
dius [radius] of the cylinder. The order is [x_center, y_center, z_center, x_axis, y_axis,
z_axis, radius].
If the 3D object model is a primitive of type sphere (see gen_sphere_object_model_3d), the return
values are the (x-, y-, z-)coordinates of the center [x_center, y_center, z_center] and the radius
[radius] of the sphere. The order is [x_center, y_center, z_center, radius].
If the 3D object model is a primitive of type plane (see gen_plane_object_model_3d), the 4 pa-
rameters of the hessian normal form are returned, i.e., the unit normal (x-, y-, z-) vector [x, y, z] and the
orthogonal distance (d) of the plane from the origin of the coordinate system. The order is [x, y, z, d]. The
sign of the distance (d) determines the side of the plane on which the origin is located.
If the 3D object model is a primitive of type box (gen_box_object_model_3d), the return values are
the 3D pose (translation, rotation, type of the rotation) and the half edge lengths (length1, length2,
length3) of the box. length1 is the length of the box along the x axis of the pose. length2 is the
length of the box along the y axis of the pose. length3 is the length of the box along the z axis of the
pose. The order is [trans_x, trans_y, trans_z, rot_x, rot_y, rot_z, rot_type, length1,
length2, length3]. For details about 3D poses and the corresponding transformation matrices see the
operator create_pose.
’primitive_parameter_pose’: The parameters of the primitive in the format of a 3D pose (e.g., obtained from the
operator fit_primitives_object_model_3d). For all types of primitives the return values are the
3D pose (translation, rotation, type of the rotation). For details about 3D poses and the corresponding trans-
formation matrices see the operator create_pose. The length of this attribute depends on ’primitive_type’
and is between 7 and 10 for each 3D object model.
If the 3D object model is a primitive of type cylinder (see gen_cylinder_object_model_3d), addi-
tionally, the radius [radius] of the cylinder is returned. The order is [trans_x, trans_y, trans_z,
rot_x, rot_y, rot_z, rot_type, radius].
If the 3D object model is a primitive of type sphere (see gen_sphere_object_model_3d), additionally,
the radius [radius] of the sphere is returned. The order is [trans_x, trans_y, trans_z, rot_x,
rot_y, rot_z, rot_type, radius].
If the 3D object model is a primitive of type plane (see gen_plane_object_model_3d), the order is
[trans_x, trans_y, trans_z, rot_x, rot_y, rot_z, rot_type].
If the 3D object model is a primitive of type box (see gen_box_object_model_3d), additionally the
half edge lengths (length1, length2, length3) of the box are returned. length1 is the length of the
box along the x axis of the pose. length2 is the length of the box along the y axis of the pose. length3
is the length of the box along the z axis of the pose. The order is [trans_x, trans_y, trans_z, rot_x,
rot_y, rot_z, rot_type, length1, length2, length3].
’primitive_pose’: The parameters of the primitive in the format of a 3D pose (e.g., obtained from the operator
fit_primitives_object_model_3d). For all types of primitives the return values are the 3D pose
(translation, rotation, type of the rotation). For details about 3D poses and the corresponding transformation
matrices see the operator create_pose. The length of this attribute is 7 for each 3D object model. The
order is [trans_x, trans_y, trans_z, rot_x, rot_y, rot_z, rot_type].
’primitive_parameter_extension’: The extents of the primitive of type cylinder and plane (e.g., obtained from
the operator fit_primitives_object_model_3d). The length of this attribute depends on ’primi-
tive_type’ and can be queried using ’num_primitive_parameter_extension’.
If the 3D object model is a primitive of type cylinder (see gen_cylinder_object_model_3d), the
return values are the extents (MinExtent, MaxExtent) of the cylinder. They are returned in the order [MinEx-
tent, MaxExtent]. MinExtent represents the length of the cylinder in negative direction of the rotation axis.
MaxExtent represents the length of the cylinder in positive direction of the rotation axis.
If the 3D object model is a primitive of type plane (created using
fit_primitives_object_model_3d), the return value is a tuple of points that are co-planar with the
fitted plane. The order is [x coordinate of point 1, x coordinate of point 2, x coordinate of point 3, ..., y
coordinate of point 1, y coordinate of point 2, y coordinate of point 3, ...]. The coordinate values describe the
support points of a convex hull, which is computed from the projections onto the fitted plane of those points
that contributed to the fitting. If the plane was created using gen_plane_object_model_3d, all points
that were used to create the plane (XExtent, YExtent) are returned.
’primitive_rms’: The quadratic residual error of the primitive (e.g., obtained from the operator
fit_primitives_object_model_3d). This attribute has length 1.
’reference_point’: 3D coordinates of the reference point of the prepared 3D shape model for shape-based 3D
matching. The reference point is the center of the smallest enclosing axis-parallel cuboid (see parameter
’bounding_box1’). This attribute has length 3.
’bounding_box1’: Smallest enclosing axis-parallel cuboid (min_x, min_y, min_z, max_x, max_y, max_z). This
attribute has length 6.
’num_points’: The number of points. This attribute has length 1.
’num_triangles’: The number of triangles. This attribute has length 1.
’num_polygons’: The number of polygons. This attribute has length 1.
’num_lines’: The number of polylines. This attribute has length 1.
’num_primitive_parameter_extension’: The number of extended data of primitives. This attribute has length 1.
’has_points’: The existence of 3D points. This attribute has length 1.
’has_point_normals’: The existence of 3D point normals. This attribute has length 1.
’has_triangles’: The existence of triangles. This attribute has length 1.
’has_polygons’: The existence of polygons. This attribute has length 1.
’has_lines’: The existence of lines. This attribute has length 1.
’has_xyz_mapping’: The existence of a mapping of the 3D points to image coordinates. This attribute has length
1.
’has_shape_based_matching_3d_data’: The existence of a shape model for shape-based 3D matching. This at-
tribute has length 1.
’has_distance_computation_data’: The existence of a precomputed data structure for 3D distance computation.
This attribute has length 1. The data structure can be created with prepare_object_model_3d using
the purpose ’distance_computation’. It is used by the operator distance_object_model_3d.
’has_surface_based_matching_data’: The existence of data for the surface-based matching. This attribute has
length 1.
’has_segmentation_data’: The existence of data for a 3D segmentation. This attribute has length 1.
’has_primitive_data’: The existence of a primitive. This attribute has length 1.
’has_primitive_rms’: The existence of a quadratic residual error of a primitive. This attribute has length 1.
’neighbor_distance’:
’neighbor_distance N’: For every point, the distance to its N-th nearest point. N must be a positive integer and is
25 by default. For every point, all other points are sorted according to their distance, and the distance of the
N-th point is returned.
’num_neighbors X’: For every point the number of neighbors within a distance of at most X.

’num_neighbors_fast X’: For every point the approximate number of neighbors within a distance of at most X.
The distances are approximated using voxels, leading to a faster processing compared to ’num_neighbors’.

Extended attributes
Extended attributes are attributes that can be derived from standard attributes by special operators (e.g.,
distance_object_model_3d), or user-defined attributes. User-defined attributes can be created by the
operator set_object_model_3d_attrib. The following extended attributes and meta data can be accessed:

’extended_attribute_names’: The names of all extended attributes. For each extended attribute name a value is
returned.
’extended_attribute_types’: The types of all extended attributes. One value is returned per extended attribute, in
the same order as the output for the extended attribute names.
’has_extended_attribute’: The existence of at least one extended attribute. For each 3D object model a value is
returned.
’num_extended_attribute’: The number of extended attributes. For each 3D object model a value is returned.
’&attribute_name’: The values stored under a user-defined extended attribute. Note that this name must start with
’&’, e.g., ’&my_attrib’. The data of the requested extended attributes are returned in GenParamValue.
The order in which the data is returned is the same as the order of the attribute names specified in
GenParamName.
’original_point_indices’: Indices of the 3D points in a different 3D object model (length can
be queried by ’num_points’). This attribute is obtained typically from the operator
triangulate_object_model_3d.
’score’: The score of the set of the 3D points (length can be queried by ’num_points’). This attribute is obtained
typically from the operator reconstruct_surface_stereo.
’red’: The red channel of the set of the 3D points (length can be queried by ’num_points’). This attribute is
obtained typically from the operator reconstruct_surface_stereo.
’green’: The green channel of the set of the 3D points (length can be queried by ’num_points’). This attribute is
obtained typically from the operator reconstruct_surface_stereo.
’blue’: The blue channel of the set of the 3D points (length can be queried by ’num_points’). This attribute is
obtained typically from the operator reconstruct_surface_stereo.
’edge_dir_x’: The x-component of a vector that is perpendicular to the edge direction and the viewing direction.
This attribute is obtained typically from the operator edges_object_model_3d
’edge_dir_y’: The y-component of a vector that is perpendicular to the edge direction and the viewing direction.
This attribute is obtained typically from the operator edges_object_model_3d
’edge_dir_z’: The z-component of a vector that is perpendicular to the edge direction and the viewing direction.
This attribute is obtained typically from the operator edges_object_model_3d
’edge_amplitude’: Contains the amplitude of edge points. This attribute is obtained typically from the operator
edges_object_model_3d

Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic attributes that are queried for the 3D object model.
Default: ’num_points’
List of values: GenParamName ∈ {’point_coord_x’, ’point_coord_y’, ’point_coord_z’, ’point_normal_x’,
’point_normal_y’, ’point_normal_z’, ’mapping_row’, ’mapping_col’, ’mapping_size’, ’triangles’, ’polygons’,
’lines’, ’diameter_axis_aligned_bounding_box’, ’center’, ’primitive_type’, ’primitive_rms’,
’primitive_parameter’, ’primitive_parameter_pose’, ’primitive_pose’, ’primitive_parameter_extension’,
’reference_point’, ’bounding_box1’, ’num_points’, ’num_triangles’, ’num_polygons’, ’num_lines’,
’num_primitive_parameter_extension’, ’has_points’, ’has_point_normals’, ’has_triangles’, ’has_polygons’,
’has_lines’, ’has_xyz_mapping’, ’has_shape_based_matching_3d_data’, ’has_surface_based_matching_data’,
’has_segmentation_data’, ’has_primitive_data’, ’has_primitive_rms’, ’extended_attribute_names’,
’extended_attribute_types’, ’has_extended_attribute’, ’num_extended_attribute’,
’has_distance_computation_data’, ’red’, ’green’, ’blue’, ’score’, ’neighbor_distance’, ’num_neighbors’,
’num_neighbors_fast’, ’original_point_indices’, ’edge_amplitude’, ’edge_dir_x’, ’edge_dir_y’, ’edge_dir_z’}

. GenParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Values of the generic parameters.
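
A minimal sketch of typical queries (the file name is hypothetical; the model is assumed to contain triangles and
polygons), illustrating the encoding of the ’triangles’ and ’polygons’ attributes described above:

read_object_model_3d ('part.om3', 'm', [], [], ObjectModel3D, Status)
get_object_model_3d_params (ObjectModel3D, 'num_points', NumPoints)
get_object_model_3d_params (ObjectModel3D, 'point_coord_x', X)
* 'triangles' holds three point indices per triangle.
get_object_model_3d_params (ObjectModel3D, 'triangles', Triangles)
FirstTriangleIndices := Triangles[0:2]
FirstTriangleX := subset(X, FirstTriangleIndices)
* 'polygons' is length-prefixed: each polygon starts with its number of corners.
get_object_model_3d_params (ObjectModel3D, 'polygons', Polygons)
Offset := 0
while (Offset < |Polygons|)
    NumCorners := Polygons[Offset]
    CornerIndices := Polygons[Offset + 1:Offset + NumCorners]
    * ... process CornerIndices ...
    Offset := Offset + NumCorners + 1
endwhile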
Result
The operator get_object_model_3d_params returns the value 2 (H_MSG_TRUE) if the given parameters
are correct. Otherwise, an exception will be raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d, prepare_object_model_3d,
sample_object_model_3d, triangulate_object_model_3d,
intersect_plane_object_model_3d, set_object_model_3d_attrib,
fit_primitives_object_model_3d, gen_plane_object_model_3d,
gen_sphere_object_model_3d, gen_cylinder_object_model_3d,
gen_box_object_model_3d, gen_sphere_object_model_3d_center
Possible Successors
select_object_model_3d, write_object_model_3d, clear_object_model_3d
Module
3D Metrology

max_diameter_object_model_3d ( : : ObjectModel3D : Diameter )

Calculate the maximal diameter of a 3D object model.


max_diameter_object_model_3d calculates the maximal diameter of the 3D object model by calculating
the convex hull of the object and searching for the pair of points on the convex hull with the largest distance.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model.
. Diameter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
Calculated diameter.
Number of elements: Diameter == ObjectModel3D
Example

gen_object_model_3d_from_points (rand(200), rand(200), rand(200), ObjectModel3D)
max_diameter_object_model_3d (ObjectModel3D, Diameter)

Result
max_diameter_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an
exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
read_object_model_3d, connection_object_model_3d
Possible Successors
select_object_model_3d
See also
volume_object_model_3d_relative_to_plane, area_object_model_3d,
moments_object_model_3d
Module
3D Metrology

moments_object_model_3d ( : : ObjectModel3D,
MomentsToCalculate : Moments )

Calculates the mean or the central moment of second order for a 3D object model.
moments_object_model_3d calculates the mean or the central moment of second order for a 3D
object model. To calculate the mean of the points of the 3D object model, select ’mean_points’ in
MomentsToCalculate. If instead the central moment of second order should be calculated, select
’central_moment_2_points’. The results are the variances along the x-, y-, and z-axes and the covariances of the
x-y, x-z, and y-z axis pairs. To compute the three principal axes of the 3D object model, select ’principal_axes’ in
MomentsToCalculate. The result is a pose with the mean of the points as its center. The coordinate system that
corresponds to the pose has the x-axis along the first principal axis, the y-axis along the second principal axis, and
the z-axis along the third principal axis.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model.
. MomentsToCalculate (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; string
Moment to calculate.
Default: ’mean_points’
List of values: MomentsToCalculate ∈ {’mean_points’, ’central_moment_2_points’, ’principal_axes’}
. Moments (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
Calculated moment.
Number of elements: Moments == ObjectModel3D
Example

gen_object_model_3d_from_points (rand(200), rand(200), rand(200), ObjectModel3D)
moments_object_model_3d (ObjectModel3D, ['mean_points',\
'central_moment_2_points','principal_axes'], \
Moments)

Result
moments_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an excep-
tion is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
read_object_model_3d, connection_object_model_3d
Possible Successors
project_object_model_3d, object_model_3d_to_xyz, select_object_model_3d

See also
volume_object_model_3d_relative_to_plane
Module
3D Metrology

select_object_model_3d ( : : ObjectModel3D, Feature, Operation,
MinValue, MaxValue : ObjectModel3DSelected )

Select 3D object models from an array of 3D object models according to global features.
select_object_model_3d selects 3D object models from an array of 3D object models for which the values
of specified global features lie within a specified range. The list of possible features that may be specified in
Feature are:

’mean_points_x’: The mean x-coordinate of the points in the 3D object model.


’mean_points_y’: The mean y-coordinate of the points in the 3D object model.
’mean_points_z’: The mean z-coordinate of the points in the 3D object model.
’diameter_axis_aligned_bounding_box’: The diameter of the set of 3D points, defined as the length of the diagonal
of the smallest enclosing axis-parallel cuboid.
’diameter_bounding_box’: The diameter of the set of 3D points, defined as the length of the diagonal of the
smallest enclosing oriented cuboid. This feature has a high calculation complexity.
’diameter_object’: The diameter of the set of 3D points, defined as the maximum distance between two points of
the 3D object model.
’volume’: The volume of the triangulation of the 3D object model relative to the x-y plane through the coordinate
origin. This corresponds to the default parametrization of volume_object_model_3d_relative_to_plane
with the plane [0,0,0,0,0,0,0]. The plane cannot be changed here.
’volume_axis_aligned_bounding_box’: The volume of the smallest enclosing axis-parallel cuboid.
’area’: The area of the triangulation of the 3D object model.
’central_moment_2_x’: The x-value of the second central moment of the 3D object model.
’central_moment_2_y’: The y-value of the second central moment of the 3D object model.
’central_moment_2_z’: The z-value of the second central moment of the 3D object model.
’central_moment_2_xy’: The xy-value of the second central moment of the 3D object model.
’central_moment_2_xz’: The xz-value of the second central moment of the 3D object model.
’central_moment_2_yz’: The yz-value of the second central moment of the 3D object model.
’num_points’: The number of points.
’num_triangles’: The number of triangles.
’num_faces’: The number of faces.
’num_lines’: The number of polylines.
’has_points’: The existence of 3D points.
’has_point_normals’: The existence of 3D point normals.
’has_triangles’: The existence of triangles.
’has_faces’: The existence of faces or polygons.
’has_lines’: The existence of lines.
’has_xyz_mapping’: The existence of a mapping of the 3D points to image coordinates.
’has_shape_based_matching_3d_data’: The existence of a shape model for shape-based 3D matching.
’has_surface_based_matching_data’: The existence of data for the surface-based 3D matching.
’has_segmentation_data’: The existence of data for a 3D segmentation.
’has_primitive_data’: The existence of a 3D primitive.

For all features listed in Feature a minimal and maximal threshold must be specified in MinValue and
MaxValue. This range is then used to select all given 3D object models that fulfill the given conditions. These
are copied to ObjectModel3DSelected. For logical parameters (e.g., ’has_points’, ’has_point_normals’,
...), MinValue and MaxValue can both be set to ’true’ to select all 3D object models that have the respective
attribute or to ’false’ to select all that do not have it. MinValue and MaxValue can be set to ’min’ and ’max’,
respectively, to ignore the corresponding threshold.
The parameter Operation defines the logical operation that is used to combine different features in Feature.
It can be either a logical ’or’ or ’and’.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handles of the available 3D object models to select.
. Feature (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
List of features a test is performed on.
Default: ’has_triangles’
List of values: Feature ∈ {’mean_points_x’, ’mean_points_y’, ’mean_points_z’, ’volume’,
’volume_axis_aligned_bounding_box’, ’central_moment_2_x’, ’central_moment_2_y’,
’central_moment_2_z’, ’central_moment_2_xy’, ’central_moment_2_xz’, ’central_moment_2_yz’,
’diameter_axis_aligned_bounding_box’, ’diameter_bounding_box’, ’diameter_object’, ’area’, ’has_points’,
’has_triangles’, ’has_faces’, ’has_lines’, ’has_xyz_mapping’, ’has_point_normals’,
’has_shape_based_matching_3d_data’, ’has_surface_based_matching_data’, ’has_segmentation_data’,
’has_primitive_data’, ’num_points’, ’num_triangles’, ’num_faces’, ’num_lines’}
. Operation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Logical operation to combine the features given in Feature.
Default: ’and’
List of values: Operation ∈ {’and’, ’or’}
. MinValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer / string
Minimum value for the given feature.
Default: 1
Suggested values: MinValue ∈ {0, 1, 100, 0.1, ’true’, ’false’, ’min’}
. MaxValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer / string
Maximum value for the given feature.
Default: 1
Suggested values: MaxValue ∈ {0, 1, 10, 100, 0.1, ’true’, ’false’, ’max’}
. ObjectModel3DSelected (output_control) . . . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
A subset of ObjectModel3D fulfilling the given conditions.
Example

gen_object_model_3d_from_points (rand(20)-1.0, rand(20)-1.0, rand(20)-1.0, ObjectModel3D1)
gen_object_model_3d_from_points (rand(20), rand(20),\
rand(20), ObjectModel3D2)
select_object_model_3d ([ObjectModel3D1, ObjectModel3D2],\
'mean_points_x', 'and', 0, 1, ObjectModel3DSelected)
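
A second sketch (ObjectModel3DTuple stands for an illustrative tuple of model handles; the threshold is
hypothetical): combine a boolean feature with a numeric one to keep only models that are triangulated and contain
at least 1000 points; ’max’ disables the upper bound for ’num_points’.

select_object_model_3d (ObjectModel3DTuple, ['has_triangles','num_points'], \
                        'and', ['true',1000], ['true','max'], \
                        ObjectModel3DSelected)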

Result
select_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception
is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
read_object_model_3d, select_points_object_model_3d,
connection_object_model_3d, get_object_model_3d_params,
volume_object_model_3d_relative_to_plane, area_object_model_3d,
max_diameter_object_model_3d, moments_object_model_3d
Possible Successors
project_object_model_3d, object_model_3d_to_xyz
See also
volume_object_model_3d_relative_to_plane, area_object_model_3d,
max_diameter_object_model_3d, moments_object_model_3d,
get_object_model_3d_params
Module
3D Metrology

smallest_bounding_box_object_model_3d ( : : ObjectModel3D,
Type : Pose, Length1, Length2, Length3 )

Calculate the smallest bounding box around the points of a 3D object model.
smallest_bounding_box_object_model_3d calculates the smallest bounding box around the points of
a 3D object model. The resulting bounding box is described using its coordinate system (Pose), which is oriented
such that the longest side of the box is aligned with the x-axis, the second longest side is aligned with the y-axis
and the smallest side is aligned with the z-axis. The lengths of the sides are returned in Length1, Length2, and
Length3, in descending order. The box can be either axis-aligned or oriented, which is selected with the parameter Type.
The algorithm for ’oriented’ is computationally significantly more costly than the algorithm for ’axis_aligned’,
and returns only an approximation of the oriented bounding box. Note that the algorithm for the oriented bounding
box is randomized and can return a different box for each call.
In order to retrieve the corners of the ’axis_aligned’ box, the operator get_object_model_3d_params can
be used with the parameter ’bounding_box1’.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
The method that is used to estimate the smallest box.
Default: ’oriented’
List of values: Type ∈ {’oriented’, ’axis_aligned’}
. Pose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
The pose that describes the position and orientation of the box that is generated. The pose has its origin in the
center of the box and is oriented such that the x-axis is aligned with the longest side of the box.
. Length1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
The length of the longest side of the box.
Number of elements: Length1 == ObjectModel3D
. Length2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
The length of the second longest side of the box.
Number of elements: Length2 == ObjectModel3D
. Length3 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
The length of the third longest side of the box.
Number of elements: Length3 == ObjectModel3D
Example

gen_object_model_3d_from_points (rand(20), rand(20), rand(20), ObjectModel3D)
smallest_bounding_box_object_model_3d (ObjectModel3D, 'oriented', \
Pose, Length1, Length2, Length3)
gen_box_object_model_3d (Pose, Length1, Length2, Length3, ObjectModel3D1)
dev_get_window (WindowHandle)
visualize_object_model_3d (WindowHandle, [ObjectModel3D,ObjectModel3D1], \
[], [], ['alpha_1'], [0.5], [], [], [], PoseOut)

Result
smallest_bounding_box_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
Possible Predecessors
connection_object_model_3d, simplify_object_model_3d
Possible Successors
gen_box_object_model_3d
See also
smallest_sphere_object_model_3d
Module
3D Metrology

smallest_sphere_object_model_3d ( : : ObjectModel3D : CenterPoint, Radius )

Calculate the smallest sphere around the points of a 3D object model.


smallest_sphere_object_model_3d calculates the smallest sphere around the points of the 3D object
model given by ObjectModel3D. The resulting center is returned in CenterPoint as three values representing
the x-, y-, and z-coordinates. The radius is returned in Radius.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model.
. CenterPoint (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real
x-, y-, and z-coordinates describing the center point of the sphere.
Number of elements: CenterPoint == 3 * ObjectModel3D
. Radius (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
The estimated radius of the sphere.
Number of elements: Radius == ObjectModel3D
Example

gen_object_model_3d_from_points (rand(20), rand(20), rand(20), ObjectModel3D)
smallest_sphere_object_model_3d(ObjectModel3D, CenterPoint, Radius)
gen_sphere_object_model_3d_center (CenterPoint[0], CenterPoint[1], \
CenterPoint[2], Radius, ObjectModel3D1)
dev_get_window (WindowHandle)
visualize_object_model_3d (WindowHandle, [ObjectModel3D,ObjectModel3D1], \
[], [], ['alpha_1'], [0.5], [], [], [], PoseOut)

Result
smallest_sphere_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If neces-
sary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).

• Processed without parallelization.


Possible Predecessors
connection_object_model_3d
Possible Successors
gen_sphere_object_model_3d
See also
smallest_bounding_box_object_model_3d
Module
3D Metrology

volume_object_model_3d_relative_to_plane ( : : ObjectModel3D,
Plane, Mode, UseFaceOrientation : Volume )

Calculate the volume of a 3D object model.


volume_object_model_3d_relative_to_plane calculates the volume under the faces of a 3D object
model relative to a plane. The plane is defined by the x-y plane of the pose given in Plane.
For ObjectModel3D, a triangulation or a list of polygons must be available. With default settings, if the mesh is
watertight and ordered, the operator calculates the actual volume of the 3D object model. To also cover cases where
the mesh is not closed or the faces are not ordered consistently, the calculation of the volume can be influenced
with the parameters Mode and UseFaceOrientation.
How the volume is calculated:
First, the operator calculates the volume of the prisms that are constructed by projecting each face onto the plane.
The individual volumes of the prisms can be positive or negative depending on the orientation of the face (away or
towards the plane) or the location of the face (above or below the plane). This can be controlled with the parameter
UseFaceOrientation.
After that, the volumes of the prisms are added up depending on the parameter Mode.
The volume returned in Volume is the absolute value of the calculated sum.
How to set the parameters:
Mode can be set to the following options:

’signed’ (default) The volumes above and below the plane are added.
’unsigned’ The volume below the plane is subtracted from the volume above the plane.
’positive’ Only faces above the plane are taken into account.
’negative’ Only faces below the plane are taken into account.

UseFaceOrientation can be set to the following options:

’true’ (default) Use the orientation of the faces relative to the plane. A face points away from the plane if the
corner points are ordered clockwise when viewed from the plane. The volume under a face is considered
positive if the orientation of the face is away from the plane. In contrast, it is considered negative if the
orientation of the face is towards the plane.
’false’ The volume under a face is considered positive if the face is located above the plane. In contrast, it is
considered negative if the face is located below the plane.

For example, with the default combination (Mode: ’signed’, UseFaceOrientation: ’true’), you can approxi-
mate the real volume of a closed object. In this case, the Plane is still required, but does not change the resulting
volume.


Example (the values correspond to a box with a 2 × 3 footprint whose bottom face lies 2 units and whose top face
lies 4 units above the plane):
(A) Mode ’signed’, UseFaceOrientation ’true’: V = (2 · 3 · 4) + (−2 · 3 · 2) = 24 − 12 = 12
(B) Mode ’signed’, UseFaceOrientation ’false’: V = (2 · 3 · 4) + (2 · 3 · 2) = 24 + 12 = 36
(C) Mode ’negative’: V = 0

Attention
The calculation of the volume might be numerically unstable in case of a large distance between the plane and the
object (approx. distance > 10000 times the object diameter).
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model.
. Plane (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
Pose of the plane.
Default: [0,0,0,0,0,0,0]
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Method to combine volumes laying above and below the reference plane.
Default: ’signed’
List of values: Mode ∈ {’positive’, ’negative’, ’unsigned’, ’signed’}
. UseFaceOrientation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Decides whether the orientation of a face should affect the resulting sign of the underlying volume.
Default: ’true’
List of values: UseFaceOrientation ∈ {’true’, ’false’}
. Volume (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
Absolute value of the calculated volume.
Number of elements: Volume == ObjectModel3D
Example

gen_box_object_model_3d ([0,0,0,0,0,0,0], 3, 2, 1, ObjectModel3D)
convex_hull_object_model_3d (ObjectModel3D, ObjectModel3DConvexHull)
volume_object_model_3d_relative_to_plane (ObjectModel3DConvexHull, \
                                          [0,0,0,0,0,0,0], 'signed', \
                                          'true', Volume)

Result
volume_object_model_3d_relative_to_plane returns 2 (H_MSG_TRUE) if all parameters are cor-
rect. If necessary, an exception is raised.


Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d, select_points_object_model_3d
Possible Successors
project_object_model_3d, object_model_3d_to_xyz, select_object_model_3d
See also
area_object_model_3d
Module
3D Metrology

4.3 Segmentation

fit_primitives_object_model_3d ( : : ObjectModel3D, GenParamName,
GenParamValue : ObjectModel3DOut )

Fit 3D primitives into a set of 3D points.


The operator fit_primitives_object_model_3d fits a 3D primitive, i.e., a simple 3D shape, into a set
of 3D points given by a 3D object model with the handle ObjectModel3D. The shapes that are available as
3D primitives comprise a cylinder, a sphere, and a plane. As the operator does not perform a segmentation
of the set of 3D points that is contained in the input 3D object model, you have to make sure that the con-
tained 3D points already correspond to a 3D primitive. A segmentation can be performed, e.g., with the operator
segment_object_model_3d.
fit_primitives_object_model_3d returns the handle ObjectModel3DOut for the output 3D object
model, which contains information concerning, e.g., the type and parameters of the fitted 3D primitive. This
information can be queried from the 3D object model with get_object_model_3d_params. Note that the
extent of primitives of the type plane and cylinder can be queried with get_object_model_3d_params, as
well.
The parameters of a cylinder are the (x-, y-, z-)coordinates of the center, the normed (x-, y-, z-)directions of
the main axis of the cylinder, and the radius of the cylinder. The center does not necessarily lie in the cen-
ter of gravity of the cylinder (see the explanation of the parameters MinExtent and MaxExtent of the operator
gen_cylinder_object_model_3d). The sign of the main axis is determined such that the main axis points
towards the half space in which the origin is located. For a sphere the parameters are the (x-, y-, z-)coordinates of
the center and the radius of the sphere. A plane is given by the 4 parameters of the Hessian normal form, i.e., the
unit normal (x-, y-, z-) vector and the orthogonal distance of the plane from the origin of the coordinate system.
The sign of the Hessian normal form is determined such that the normal vector points towards the side of the plane
on which the origin is located and the distance is not positive.
If no primitive can be fitted to the set of 3D points, the returned object model will not contain a primitive. However,
depending on the parameter values for ’output_point_coord’ and ’output_xyz_mapping’ (see below), the returned
object model is either empty, or contains the 3D points, or contains the 3D points and the mapping from the 3D
points to image coordinates of the input object model ObjectModel3D.
To control the fitting, you can adjust some generic parameters within GenParamName and GenParamValue.
But note that for a lot of applications the default values are sufficient and no adjustment is necessary. The following
values for GenParamName and GenParamValue are possible:

’primitive_type’: The parameter specifies which type of 3D primitive should be fitted into the set of 3D points.
You can specify a specific primitive type by setting ’primitive_type’ to ’cylinder’, ’sphere’, or ’plane’. Then,
only the selected type of 3D primitive is fitted into the set of 3D points. You can also specify a set of specific
3D primitives that should be fitted by setting ’primitive_type’ to a tuple consisting of different primitive types.
If all types of 3D primitives should be fitted, you can set ’primitive_type’ to ’all’. Note that if more than one


primitive type is selected, only the best fitting 3D primitive, i.e., the 3D primitive with the smallest quadratic
residual error, is returned.
List of values: ’cylinder’, ’sphere’, ’plane’, ’all’
Default: ’cylinder’
’fitting_algorithm’: The parameter specifies the used algorithm for the fitting of the 3D primitive. When fitting
a plane, the results are identical for the different algorithms. If ’fitting_algorithm’ is set to ’least_squares’,
the approach minimizes the quadratic distance between the 3D points and the resulting primitive. If ’fit-
ting_algorithm’ is set to ’least_squares_huber’, the approach is similar to ’least_squares’, but the points are
weighted to decrease the impact of outliers based on the approach of Huber (see below). If ’fitting_algorithm’
is set to ’least_squares_tukey’, the approach is also similar to ’least_squares’, but the points are weighted
and outliers are ignored based on the approach of Tukey (see below).
For ’least_squares_huber’ and ’least_squares_tukey’, a robust error statistic is used to estimate the standard
deviation of the distances of the object points (excluding outliers) from the fitted primitive. The Tukey
algorithm removes outliers, whereas the Huber algorithm only dampens them, or more precisely, weights them
linearly. In practice, the approach of Tukey is recommended.
List of values: ’least_squares’, ’least_squares_huber’, ’least_squares_tukey’
Default: ’least_squares’
’min_radius’: The parameter specifies the minimum radius of a cylinder or a sphere. If a cylinder or a sphere with
a smaller radius is fitted, the resulting 3D object model is empty. The parameter is ignored when fitting a
plane. The unit is meter.
Suggested values: 0.01, 0.02, 0.1
Default: 0.01
’max_radius’: The parameter specifies the maximum radius of a cylinder or a sphere. If a cylinder or a sphere
with a larger radius is fitted, the resulting 3D object model is empty. The parameter is ignored when fitting a
plane. The unit is meter.
Suggested values: 0.02, 0.04, 0.2
Default: 0.2
’output_point_coord’: The parameter determines if the 3D points used for the fitting are copied to the output 3D
object model. If ’output_point_coord’ is set to ’true’, the 3D points are copied. If ’output_point_coord’ is set
to ’false’, no 3D points are copied.
List of values: ’true’, ’false’
Default: ’true’
’output_xyz_mapping’: The parameter determines if a mapping from the 3D points to image coordinates is
copied to the output 3D object model. This information is needed, e.g., when using the operator
object_model_3d_to_xyz after the fitting (e.g., for a visualization). If ’output_xyz_mapping’ is set
to ’true’, the image coordinate mapping is copied. Note that the parameter is only valid, if the image coor-
dinate mapping is available in the input 3D object model. Make sure that, if you derive the input 3D object
model by copying it with the operator copy_object_model_3d from a 3D object model that contains
such a mapping, the mapping is copied, too. Furthermore, the parameter is only valid, if the 3D points are
copied to the output 3D object model, which is set with the parameter ’output_point_coord’.
List of values: ’true’, ’false’
Default: ’false’

The minimum number of 3D points that is necessary to fit a plane is three. The minimum number of 3D points
that is necessary to fit a sphere is four. The minimum number of 3D points that is necessary to fit a cylinder is five.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the input 3D object model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Number of elements: GenParamName == GenParamValue
List of values: GenParamName ∈ {’primitive_type’, ’fitting_algorithm’, ’min_radius’, ’max_radius’,
’output_point_coord’, ’output_xyz_mapping’}


. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string / real / integer


Values of the generic parameters.
Number of elements: GenParamValue == GenParamName
Suggested values: GenParamValue ∈ {’cylinder’, ’sphere’, ’plane’, ’all’, ’least_squares’,
’least_squares_huber’, ’least_squares_tukey’, 0.01, 0.05, 0.1, 0.2, ’true’, ’false’}
. ObjectModel3DOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handle of the output 3D object model.
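Example

The following lines are a minimal sketch only; the file name 'sphere_points.om3' and the queried parameter name 'primitive_parameter' are illustrative assumptions and not prescribed by the operator.

* Read a point cloud that is assumed to represent a single sphere.
read_object_model_3d ('sphere_points.om3', 'm', [], [], ObjectModel3D, Status)
* Fit only a sphere, using the robust Tukey weighting.
fit_primitives_object_model_3d (ObjectModel3D, \
                                ['primitive_type','fitting_algorithm'], \
                                ['sphere','least_squares_tukey'], \
                                ObjectModel3DOut)
* Query the parameters of the fitted primitive (center and radius for a sphere).
get_object_model_3d_params (ObjectModel3DOut, 'primitive_parameter', \
                            PrimitiveParams)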
Result
fit_primitives_object_model_3d returns 2 (H_MSG_TRUE) if all parameter values are correct. If nec-
essary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
xyz_to_object_model_3d, read_object_model_3d
Possible Successors
get_object_model_3d_params, object_model_3d_to_xyz, write_object_model_3d,
clear_object_model_3d
Alternatives
segment_object_model_3d
Module
3D Metrology

reduce_object_model_3d_by_view ( Region : : ObjectModel3D,
CamParam, Pose : ObjectModel3DReduced )

Remove points from a 3D object model by projecting it to a virtual view and removing all points outside of a given
region.
reduce_object_model_3d_by_view projects the points of ObjectModel3D into the image plane given
by Pose and CamParam and reduces the 3D object model to the points lying inside the region given in Region.
In particular, the points are first transformed with the pose and then projected using the camera parameters. Only
those points that are located inside the specified region are copied to the new 3D object model.
Faces of a mesh are only contained in the output 3D object model if all corner points are within the region.
As an alternative to camera parameters and a pose, an XYZ mapping contained in ObjectModel3D can be used for
the reduction. For this, CamParam must be set to ’xyz_mapping’ or an empty tuple and an empty tuple must be
passed to Pose. In this case, the original image coordinates of the 3D points are used to check if a point is inside
Region.
Attention
Cameras with hypercentric lenses are not supported.
Parameters
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; object
Region in the image plane.
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
Suggested values: CamParam ∈ {’xyz_mapping’, []}
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
3D pose of the world coordinate system in camera coordinates.
Number of elements: Pose == 7


. ObjectModel3DReduced (output_control) . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle


Handle of the reduced 3D object model.
Example

gen_object_model_3d_from_points (200*(rand(100)-0.5), \
200*(rand(100)-0.5), \
200*(rand(100)-0.5), ObjectModel3D)
gen_circle (Circle, 240, 320, 60)
CamParam := ['area_scan_telecentric_division',1,0,1,1,320,240,640,480]
Pose := [0,0,1,0,0,0,0]
reduce_object_model_3d_by_view (Circle, ObjectModel3D, CamParam, \
Pose, ObjectModel3DReduced)
dev_get_window (WindowHandle)
visualize_object_model_3d (WindowHandle, [ObjectModel3D, \
ObjectModel3DReduced], CamParam, Pose, \
['color_0', 'point_size_1'], ['blue',6], \
[], [], [], PoseOut)
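
As a sketch of the alternative described above, the reduction can also be based on the XYZ mapping of the input model; here it is assumed that ObjectModel3DXYZ was created with xyz_to_object_model_3d so that the mapping is available:

* CamParam set to 'xyz_mapping' and an empty Pose: the original image
* coordinates of the 3D points are checked against the region.
reduce_object_model_3d_by_view (Circle, ObjectModel3DXYZ, 'xyz_mapping', [], \
                                ObjectModel3DReducedMapping)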

Result
reduce_object_model_3d_by_view returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary,
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d
Possible Successors
project_object_model_3d, object_model_3d_to_xyz
See also
select_points_object_model_3d
Module
3D Metrology

segment_object_model_3d ( : : ObjectModel3D, GenParamName,
GenParamValue : ObjectModel3DOut )

Segment a set of 3D points into sub-sets with similar characteristics.


The operator segment_object_model_3d segments a set of 3D points given by a 3D object model with
the handle ObjectModel3D into several sub-sets of neighbored 3D points with similar characteristics like the
same normal orientation or curvature. By default, the operator then tries to fit a 3D primitive, i.e., a simple 3D
shape like a plane, a sphere, or a cylinder, into each of these sub-sets. As result, the operator returns a tuple of
handles for the 3D object models that represent the individual sub-sets of 3D points (ObjectModel3DOut).
Within these 3D object models, information is stored that concerns, e.g., the success of the fitting and the type and
parameters of the fitted 3D primitive. This information can be queried from the individual 3D object model with
get_object_model_3d_params.
Before calling segment_object_model_3d, the input 3D object model should be prepared for the segmen-
tation using the operator prepare_object_model_3d with the parameter Purpose set to ’segmentation’.
If the input 3D object model is not prepared this way, the operator prepare_object_model_3d is called
internally within segment_object_model_3d to extend the 3D object model with attributes that were not
explicitly but only implicitly contained in the 3D object model.


To control the segmentation and the fitting, you can adjust some generic parameters within GenParamName and
GenParamValue. But note that for a lot of applications the default values are sufficient and no adjustment is
necessary. The following values for GenParamName and GenParamValue are possible:
’max_orientation_diff’: The parameter specifies the maximum angle between the point normals of two neighbored
3D points (in radians) that is allowed so that the two points belong to the same sub-set of 3D points. For a
cylinder or sphere, the parameter value depends on the dimension of the object and on the distance of the
neighbored 3D points. I.e., if the cylinder or sphere has a very small radius or if the 3D points are not very
dense, the value must be chosen higher. For a plane the value is independent from the dimension of the object
and can be set to a small value.
Suggested values: 0.10, 0.15, 0.20
Default: 0.15
’max_curvature_diff’: The parameter specifies the maximum difference between the curvatures of the surface at
the positions of two neighbored 3D points that is allowed so that the two points belong to the same sub-set
of 3D points. The value depends on the noise of the 3D points. I.e., if the noise level of the 3D points is very
high, the value must be set to a higher value. Generally, the number of resulting 3D object models decreases
for a higher value, because more 3D points are merged to a sub-set of 3D points.
Suggested values: 0.03, 0.04, 0.05
Default: 0.05
’min_area’: The parameter specifies the minimum number of 3D points needed for a sub-set of connected 3D
points to be returned by the segmentation. Thus, for a sub-set with fewer points the points are deleted and no
output handle is created.
Suggested values: 1, 10, 100
Default: 100
’fitting’: The parameter specifies whether after the segmentation 3D primitives are fitted into the sub-sets
of 3D points. If ’fitting’ is set to ’true’, which is the default, the fitting is calculated and the
3D object models with the resulting handles contain the parameters of the corresponding 3D prim-
itives. The output parameters of a cylinder, a sphere, or a plane are described with the operator
fit_primitives_object_model_3d. If ’fitting’ is set to ’false’, only a segmentation is performed
and the output 3D object models contain the segmented sub-sets of 3D points. A later fitting can be performed
with the operator fit_primitives_object_model_3d.
List of values: ’false’, ’true’
Default: ’true’
’output_xyz_mapping’: The parameter determines if a mapping from the segmented 3D points to image coordi-
nates is copied to the output 3D object model. This information is needed, e.g., when using the operator
object_model_3d_to_xyz after the segmentation (e.g., for a visualization). If ’output_xyz_mapping’
is set to ’true’, the image coordinate mapping is copied. Note that the parameter is only valid, if the im-
age coordinate mapping is available in the input 3D object model. Make sure that, if you derive the input
3D object model by copying it with the operator copy_object_model_3d from a 3D object model that
contains such a mapping, the mapping is copied, too. Furthermore, the parameter is only valid, if the 3D
points are copied to the output 3D object model, which is set with the parameter ’output_point_coord’. If
’output_xyz_mapping’ is set to ’false’, the image coordinate mapping is not copied.
List of values: ’true’, ’false’
Default: ’false’
’primitive_type’, ’fitting_algorithm’, ’min_radius’, ’max_radius’, ’output_point_coord’: These parameters are
used, if ’fitting’ is set to ’true’, which is the default. The meaning and the use of these parameters is de-
scribed with the operator fit_primitives_object_model_3d.
’surface_check’: The parameter determines whether the surface of a triangulated input object model is checked
regarding its conformity to the expected requirements. If the input 3D object model contains tri-
angles that are topologically invalid an error message is raised. If the triangulation was created
(triangulate_object_model_3d) or edited (e.g., by simplify_object_model_3d) by a HAL-
CON operator, a surface check should not be necessary. The check can be disabled in order to enhance the
runtime by setting ’surface_check’ to ’false’.
List of values: ’true’, ’false’
Default: ’true’


Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the input 3D object model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Number of elements: GenParamName == GenParamValue
List of values: GenParamName ∈ {’max_orientation_diff’, ’max_curvature_diff’, ’min_area’,
’primitive_type’, ’fitting_algorithm’, ’min_radius’, ’max_radius’, ’output_point_coord’,
’output_xyz_mapping’, ’surface_check’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string / real / integer
Values of the generic parameters.
Number of elements: GenParamValue == GenParamName
Suggested values: GenParamValue ∈ {0.15, 0.05, 100, ’true’, ’false’, ’cylinder’, ’sphere’, ’plane’, ’all’,
’least_squares’, ’least_squares_huber’, ’least_squares_tukey’}
. ObjectModel3DOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handle of the output 3D object model.
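Example

A minimal sketch, assuming XYZ images from a 3D sensor are available in X, Y, and Z; the generic parameter values are only the illustrative defaults, and the queried parameter name 'has_primitive_data' is an assumption for illustration.

xyz_to_object_model_3d (X, Y, Z, ObjectModel3D)
* Prepare the model explicitly so that the implicit attributes are available.
prepare_object_model_3d (ObjectModel3D, 'segmentation', 'false', [], [])
segment_object_model_3d (ObjectModel3D, \
                         ['max_orientation_diff','max_curvature_diff','min_area'], \
                         [0.15,0.05,100], ObjectModel3DOut)
* Check for which sub-sets a primitive could be fitted.
get_object_model_3d_params (ObjectModel3DOut, 'has_primitive_data', HasPrimitive)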
Result
segment_object_model_3d returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an
exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
xyz_to_object_model_3d, read_object_model_3d, prepare_object_model_3d
Possible Successors
get_object_model_3d_params, object_model_3d_to_xyz, write_object_model_3d,
clear_object_model_3d
See also
fit_primitives_object_model_3d
Module
3D Metrology

select_points_object_model_3d ( : : ObjectModel3D, Attrib,
MinValue, MaxValue : ObjectModel3DThresholded )

Apply a threshold to an attribute of 3D object models.


select_points_object_model_3d selects points of the 3D object model ObjectModel3D according to
the attributes and thresholds passed in Attrib, MinValue, and MaxValue respectively. The selected points
are returned in the 3D object model ObjectModel3DThresholded. All attributes that are connected with the
points (e.g., polygons or triangles) are adapted in such a way that there is no reference to the removed points left.
Attrib can either contain a tuple of numbers that has the same length as ObjectModel3D has points, or a list
of attribute names on which the thresholds are applied.
If Attrib contains a tuple of numbers, exactly one number must be passed in both MinValue and MaxValue.
All points for which the corresponding entry in Attrib is between the two thresholds are added to the output 3D
object model ObjectModel3DThresholded.
Otherwise, Attrib can contain a list of attribute names that refer to properties of the 3D object model
ObjectModel3D. All points, for which the value stored in the attribute Attrib is inside the interval speci-
fied in MinValue and MaxValue are stored in the output 3D object model. MinValue and MaxValue must
contain exactly as many values as Attrib. If Attrib contains multiple values, only those points are stored in
the output 3D object model that fulfill all the criteria.


Depending on the properties of ObjectModel3D, the following values are possible for Attrib:
The following attributes are available:

’point_coord_x’: The x-coordinates of the set of 3D points.


’point_coord_y’: The y-coordinates of the set of 3D points.
’point_coord_z’: The z-coordinates of the set of 3D points.
’point_normal_x’: The x-components of the 3D point normals of the set of 3D points.
’point_normal_y’: The y-components of the 3D point normals of the set of 3D points.
’point_normal_z’: The z-components of the 3D point normals of the set of 3D points.
’mapping_row’: The row-components of the 2D mapping of the set of 3D points.
’mapping_col’: The column-components of the 2D mapping of the set of 3D points.
’neighbor_distance’, ’neighbor_distance N’: The distance of the N-th nearest point. N must be a positive integer and is by default 25.
For every point, all other points are sorted according to their distance and the distance of the N-th point is
used.
’num_neighbors X’: The number of neighbors within a distance of at most X. It can be used to remove sparsely
populated parts of the 3D object model, such as outliers or points that are created by smoothing between 3D
surfaces.
’num_neighbors_fast X’: The approximate number of neighbors within a distance of at most X. The distances are
approximated using voxels, leading to a faster processing compared to ’num_neighbors’.
Extended attribute: Enter the name of an extended attribute of the type ’vertices’ and the selection will be applied
based on the values of the extended attribute.

Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object models.
. Attrib (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Attributes the threshold is applied to.
Default: ’point_coord_z’
List of values: Attrib ∈ {’point_coord_x’, ’point_coord_y’, ’point_coord_z’, ’point_normal_x’,
’point_normal_y’, ’point_normal_z’, ’mapping_row’, ’mapping_col’, ’neighbor_distance’, ’num_neighbors’,
’num_neighbors_fast’}
. MinValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Minimum value for the attributes specified by Attrib.
Default: 0.5
. MaxValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Maximum value for the attributes specified by Attrib.
Default: 1.0
. ObjectModel3DThresholded (output_control) . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handle of the reduced 3D object models.
Example

gen_object_model_3d_from_points (rand(100), rand(100), rand(100), ObjectModel3D)
select_points_object_model_3d (ObjectModel3D, 'point_coord_z', \
                               0.5, 1, ObjectModel3DThresholded)
get_object_model_3d_params (ObjectModel3DThresholded, 'num_points', \
                            NumPoints)
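
To remove sparsely populated parts such as outliers, the 'num_neighbors X' attribute described above can be used; the following is only a sketch, with the distance 0.1 and the thresholds chosen as illustrative values:

* Keep only points that have at least 3 neighbors within a distance of 0.1.
select_points_object_model_3d (ObjectModel3D, 'num_neighbors 0.1', \
                               3, 100, ObjectModel3DDenoised)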

Result
select_points_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary,
an exception is raised. If the required points are missing in the object model, i.e., an empty object model is passed,
the error 9515 is raised.
Execution Information


• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d
Possible Successors
connection_object_model_3d, project_object_model_3d, object_model_3d_to_xyz
See also
connection_object_model_3d, reduce_object_model_3d_by_view
Module
3D Metrology

4.4 Transformations

affine_trans_object_model_3d ( : : ObjectModel3D,
HomMat3D : ObjectModel3DAffineTrans )

Apply an arbitrary affine 3D transformation to 3D object models.


affine_trans_object_model_3d applies arbitrary affine 3D transformations, i.e., scaling, rotation, and
translation, to 3D object models and returns the handles of the transformed 3D object models. The affine transfor-
mations are described by the homogeneous transformation matrices given in HomMat3D.
The transformation matrices can be created using the operators hom_mat3d_identity, hom_mat3d_scale,
hom_mat3d_rotate, hom_mat3d_translate, etc., or it can be the result of pose_to_hom_mat3d (see
affine_trans_point_3d).
In general, the operator affine_trans_object_model_3d is not necessary in the context of shape based
3D matching. Instead, if a rotation of the 3D object model into a reference orientation should be performed,
appropriate values for the parameters RefRotX, RefRotY, RefRotZ, and OrderOfRotation should be
passed to the operator create_shape_model_3d.
affine_trans_object_model_3d transforms one or more 3D object models with the same transformation
matrix if only one transformation matrix is passed in HomMat3D (N:1). If a single 3D object model is passed in
ObjectModel3D, it is transformed with all passed transformation matrices (1:N). If the number of transforma-
tion matrices corresponds to the number of 3D object models, every 3D object model is transformed individually
with the respective transformation matrix (N:N). In those cases, N can be zero, i.e., no matrix or no 3D object model
can be passed to the operator. In this case, an empty tuple is returned in ObjectModel3DAffineTrans. This
can be used to, for example, transform the results of other operators without checking first if at least one matrix
was returned.
Attention
affine_trans_object_model_3d transforms the attributes of type 3D points, 3D point normals, and the
prepared shape model for shape-based 3D matching. Primitives and precomputed data structures for 3D distance
computation are not copied. All other attributes are copied without modification. To transform 3D primitives, the
operator rigid_trans_object_model_3d must be used.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handles of the 3D object models.
. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d(-array) ; real
Transformation matrices.
. ObjectModel3DAffineTrans (output_control) . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handles of the transformed 3D object models.
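Example

A minimal sketch; the random point cloud and the rotation angle are arbitrary illustrative values.

gen_object_model_3d_from_points (rand(50), rand(50), rand(50), ObjectModel3D)
* Build a transformation matrix: rotation of 90 degrees around the z-axis.
hom_mat3d_identity (HomMat3DIdentity)
hom_mat3d_rotate (HomMat3DIdentity, rad(90), 'z', 0, 0, 0, HomMat3DRotate)
affine_trans_object_model_3d (ObjectModel3D, HomMat3DRotate, \
                              ObjectModel3DAffineTrans)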
Result
affine_trans_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an
exception is raised.
Execution Information


• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d
Possible Successors
project_object_model_3d, object_model_3d_to_xyz
See also
affine_trans_point_3d, rigid_trans_object_model_3d,
projective_trans_object_model_3d
Module
3D Metrology

connection_object_model_3d ( : : ObjectModel3D, Feature,
Value : ObjectModel3DConnected )

Determine the connected components of the 3D object model.


connection_object_model_3d determines the connected components of the input 3D object model given
in ObjectModel3D. The decision if two parts of the 3D object model are connected can be based on different
attributes and respective distance functions. The attribute and distance function can be selected in Feature:
’distance_3d’: The Euclidean distance between the point coordinates of the 3D points is tested. For
any distance below Value the points are considered as connected.
’angle’: The angles between the normals of the points in the 3D object model are compared. Two points are
considered as connected if the angle between their normals is below Value. Value is specified in radians and
should be between 0 and pi.
Prerequisite: The 3D object model must contain normals, which can be computed with
surface_normals_object_model_3d.
’distance_mapping’: The mapping measures the distance between the pixel coordinates of points in the 3D object
model that are stored in the 2D mapping. Use a value larger than 1.5 for Value to get a connection in an
8-neighborhood in the image.
Prerequisite: The 3D object model must contain a 2D mapping, which is available if the 3D object model has
been created with xyz_to_object_model_3d.
’mesh’: Returns parts of the 3D object model that are connected with triangles or polygons. Value is ignored.
Prerequisite: The 3D object model must provide a triangulation, which can be obtained with
triangulate_object_model_3d. Alternatively, if the 3D object model already contains a 2D map-
ping, prepare_object_model_3d can be used with Purpose set to ’segmentation’ to quickly trian-
gulate the 3D object model.
’lines’: Returns parts of the object model that are connected by lines. Value is ignored.
Prerequisite: The 3D object model must contain polylines, which can be computed with
intersect_plane_object_model_3d.
Alternatively, the required attributes can be set manually with set_object_model_3d_attrib or
set_object_model_3d_attrib_mod. Note that the 3D object model might already contain the required
attribute, especially if the 3D object model has been read with read_object_model_3d or if it has been de-
serialized with deserialize_object_model_3d. To check whether the required attribute is available, use
get_object_model_3d_params.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model.
. Feature (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Attribute used to calculate the connected components.
Default: ’distance_3d’
List of values: Feature ∈ {’distance_3d’, ’angle’, ’distance_mapping’, ’mesh’, ’lines’}


. Value (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer


Maximum value for the distance between two connected components.
Default: 1.0
Suggested values: Value ∈ {1.0, 1.1, 1.5, 10.0, 100.0}
. ObjectModel3DConnected (output_control) . . . . . . . . . . . . . . . . . . . . . . . object_model_3d-array ; handle
Handle of the 3D object models that represent the connected components.
Example

gen_object_model_3d_from_points (rand(100), rand(100), rand(100), ObjectModel3D)
connection_object_model_3d (ObjectModel3D, 'distance_3d', 0.2, \
                            ObjectModel3DConnected)
dev_get_window (WindowHandle)
visualize_object_model_3d (WindowHandle, [ObjectModel3DConnected], [], [], \
                           ['colored'], [12], [], [], [], PoseOut)
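
To use the 'angle' feature instead, the point normals must be available first; the following is only a sketch, with the normal estimation method 'mls' and the threshold of 0.3 rad chosen as illustrative values:

* Compute point normals, then connect points with similar normal orientation.
surface_normals_object_model_3d (ObjectModel3D, 'mls', [], [], \
                                 ObjectModel3DNormals)
connection_object_model_3d (ObjectModel3DNormals, 'angle', 0.3, \
                            ObjectModel3DConnectedByAngle)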

Result
connection_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an
exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d, select_points_object_model_3d
Possible Successors
project_object_model_3d, object_model_3d_to_xyz, select_object_model_3d
See also
select_object_model_3d, select_points_object_model_3d
Module
3D Metrology

convex_hull_object_model_3d ( : : ObjectModel3D : ObjectModel3DConvexHull )

Calculate the convex hull of a 3D object model.


convex_hull_object_model_3d calculates the convex hull of the 3D object model given in
ObjectModel3D. The operator returns the convex hull as a 3D object model with the handle
ObjectModel3DConvexHull.
If one of the dimensions of the input points has no deviation at all, the result will consist of lines and not triangles.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model.
. ObjectModel3DConvexHull (output_control) . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handle of the 3D object model that describes the convex hull.
Number of elements: ObjectModel3DConvexHull == ObjectModel3D
Example

gen_object_model_3d_from_points (rand(20)-0.5, rand(20)-0.5, \
                                 rand(20)-0.5, ObjectModel3D)
convex_hull_object_model_3d (ObjectModel3D, ObjectModel3DConvexHull)
dev_get_window (WindowHandle)
visualize_object_model_3d (WindowHandle, [ObjectModel3DConvexHull], \
                           [], [], [], [], [], [], [], PoseOut)

Result
convex_hull_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an
exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
read_object_model_3d, connection_object_model_3d,
select_points_object_model_3d
Possible Successors
project_object_model_3d
Module
3D Metrology

edges_object_model_3d ( : : ObjectModel3D, MinAmplitude,
GenParamName, GenParamValue : ObjectModel3DEdges )

Find edges in a 3D object model.


edges_object_model_3d finds 3D edges in the 3D object model ObjectModel3D and returns them in the
3D object model ObjectModel3DEdges.
The operator supports edge extraction only from 3D object models that contain an XYZ mapping, such as models
that were created with xyz_to_object_model_3d or that were obtained with a sensor that delivers the map-
ping. MinAmplitude defines the minimum amplitude of a discontinuity in order to be classified as an edge. It
is given in the same unit as used in ObjectModel3D.
The extracted edges are a subset of the points of the input object model. In addition to the coordinates of the edges,
the point normal vectors in ObjectModel3DEdges contain the viewing direction of each 3D edge point from
the viewpoint towards the edge point. Also, the attributes ’edge_dir_x’, ’edge_dir_y’ and ’edge_dir_z’ contain a
vector that is perpendicular to the edge direction and to the viewing direction. The attributes are set such that the
3D object model can be used for edge-supported surface-based matching in find_surface_model.
Generic parameters can optionally be used to influence the edge extraction. If desired, these parameters and their
corresponding values can be specified with GenParamName and GenParamValue. The following values for
GenParamName are possible:

’max_gap’: This parameter specifies the maximum gap size in pixels in the XYZ-images that are closed. Gaps
larger than this value will contain edges at their boundary, while gaps smaller than this value will not. This
suppresses edges around smaller patches that were not reconstructed by the sensor as well as edges at the
more distant part of a discontinuity. For sensors with very large resolutions, the value should be increased to
avoid spurious edges.
Default: 30.
’estimate_viewpose’: This parameter can be used to turn off the automatic viewpose estimation and set a manual
viewpoint.
Default: ’true’.
’viewpoint’: This parameter only has an effect when ’estimate_viewpose’ is set to ’false’. It specifies the viewpoint
from which the 3D data is seen. It is used to determine the viewing directions and edge directions. It defaults
to the origin ’0 0 0’ of the 3D data. If the projection center is at a different location, for example, if the 3D


object model was transformed with rigid_trans_object_model_3d or if the 3D sensor performed


a similar transformation, the original viewpoint must be set. For this, GenParamValue must contain a
string consisting of the three coordinates (x, y and z) of the viewpoint, separated by spaces. The viewpoint is
defined in the same coordinate frame as ObjectModel3D. Note that for use of this parameter, the values in
the X-, Y-, and Z- images obtained from object_model_3d_to_xyz must have increasing values from
left to right, top to bottom, and for object parts further away from the camera, respectively.
Default: ’0 0 0’.

Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model whose edges should be computed.
. MinAmplitude (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Edge threshold.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’max_gap’, ’estimate_viewpose’, ’viewpoint’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer / string
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {’0 0 0’, 10, 30, 100, ’true’, ’false’}
. ObjectModel3DEdges (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
3D object model containing the edges.
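Example

A minimal sketch, assuming XYZ images from a 3D sensor are available in X, Y, and Z so that the required XYZ mapping is present; the amplitude threshold of 0.005 m and the 'max_gap' value are illustrative choices.

xyz_to_object_model_3d (X, Y, Z, ObjectModel3D)
* Extract 3D edges with a minimum discontinuity of 5 mm.
edges_object_model_3d (ObjectModel3D, 0.005, 'max_gap', 30, \
                       ObjectModel3DEdges)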
Result
edges_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception
is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d
Possible Successors
find_surface_model, find_surface_model_image, refine_surface_model_pose,
refine_surface_model_pose_image
Module
3D Metrology

fuse_object_model_3d ( : : ObjectModel3D, BoundingBox, Resolution,
SurfaceTolerance, MinThickness, Smoothing, NormalDirection,
GenParamName, GenParamValue : ObjectModel3DFusion )

Fuse 3D object models into a surface.


fuse_object_model_3d fuses multiple point clouds representing an object surface into a watertight surface
ObjectModel3DFusion. The operator can be used to simplify the postprocessing step of point clouds that
are already registered in the same coordinate system. In particular, unification, suppression of outliers, trade-off
between smoothing and preservation of edges, equidistant sub-sampling, hole filling, and meshing of the output
surface can often be handled nicely and in high quality. On the other hand, these advantages come at the price of a
high runtime.
If you want to fuse 3D point clouds acquired by stereo reconstruction, you should use
reconstruct_surface_stereo instead of fuse_object_model_3d.
Workflow


1. Acquire point clouds and transform them into a common coordinate system, for example using
register_object_model_3d_pair and register_object_model_3d_global.
2. If not already available, compute triangles or point normals for the point clouds using
triangulate_object_model_3d or surface_normals_object_model_3d. A triangu-
lation is more suitable if you have surfaces with many outliers or holes that should be closed. Otherwise, for
clean surfaces, you can work with normals.
3. Inspect the normals of the input models using visualize_object_model_3d with GenParamName
’disp_normals’ or dev_inspect_ctrl. The point or triangle normals have to be oriented consistently
towards the inside or outside of the object. Set NormalDirection accordingly to ’inwards’ or ’outwards’.
4. Specify the volume of interest in BoundingBox. To obtain a first guess for BoundingBox, use
get_object_model_3d_params with GenParamName set to ’bounding_box1’.
5. Specify an initial set of parameters: a rough Resolution (e.g., 1/100 of the diameter of the
BoundingBox), SurfaceTolerance at least a bit larger (e.g., 5*Resolution), MinThickness
as the minimum thickness of the object (if the input point clouds represent the object only from one side, set
it very high, so that the object is cut off at the BoundingBox), Smoothing set to 1.0.
6. Apply fuse_object_model_3d and readjust the parameters to improve the results with respect to quality
and runtime, see below. Use a Resolution just fine enough to make out the details of your object while
tuning the other parameters, in order to avoid long runtimes. Also consider using the additional parameters
in GenParamName.
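
The following lines sketch this workflow in HDevelop; the file names, the normal direction ('outwards'), and the derived parameter values are only illustrative assumptions and have to be adapted to the actual data.

* Two point clouds already registered in a common coordinate system.
read_object_model_3d ('scan_1.om3', 'm', [], [], OM3D1, Status1)
read_object_model_3d ('scan_2.om3', 'm', [], [], OM3D2, Status2)
* Compute point normals (assumed to point outwards for this data).
surface_normals_object_model_3d (OM3D1, 'mls', [], [], OM3DNormals1)
surface_normals_object_model_3d (OM3D2, 'mls', [], [], OM3DNormals2)
* First guess for the volume of interest and a length scale.
get_object_model_3d_params (OM3DNormals1, 'bounding_box1', BoundingBox)
get_object_model_3d_params (OM3DNormals1, \
                            'diameter_axis_aligned_bounding_box', Diameter)
Resolution := Diameter / 100.0
SurfaceTolerance := 5 * Resolution
MinThickness := Diameter
fuse_object_model_3d ([OM3DNormals1,OM3DNormals2], BoundingBox, Resolution, \
                      SurfaceTolerance, MinThickness, 1.0, 'outwards', \
                      [], [], ObjectModel3DFusion)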

Parameter Description
See the HDevelop example fuse_object_model_3d_workflow for an explanation of how to fine-tune the
parameters for your application.
The input point clouds ObjectModel3D have to lie in a common coordinate system and add up to the initial
surface. Furthermore, they must contain triangles or point normals. If both attributes are present, normals are
used as a default due to speed advantages. If triangles should be used, use copy_object_model_3d to obtain
only point and triangle information. Surfaces with many outliers or holes to be closed should be used with a
triangulation, clean surfaces with normals. The point or triangle normals have to be oriented consistently towards
the inside or outside of the object.
NormalDirection is used to specify whether the point or triangle normals point ’inwards’ or ’outwards’. If
only one value is specified, it is applied to all input models. Otherwise, the number of values has to equal the
number of input models.
BoundingBox specifies the volume of interest to be taken into account for input and output. Note that points
outside the bounding box are discarded. Triangles of the input point cloud with a point outside the BoundingBox
are discarded, not clipped. The BoundingBox is specified as a tuple [x1,y1,z1,x2,y2,z2] assigning two
opposite corner points P1=[x1,y1,z1] and P2=[x2,y2,z2] of the rectangular cuboid (with edges parallel
to the coordinate axes). For a valid bounding box, P1 must be the point on the front lower left corner and P2 on
the back upper right corner of the bounding box, i.e., x1<x2, y1<y2 and z1<z2. Note that the operator will
try to produce a closed surface. If the input point clouds represent the object from only one point of view, the
bounding box should usually cut off the unknown part; therefore, MinThickness should be set, e.g., to
a value larger than or equal to the length of the diagonal of the bounding box (which can be obtained by using
get_object_model_3d_params with the parameter ’diameter_axis_aligned_bounding_box’). An object
cut off by a surface of the bounding box has no points at this specific surface, thus has a hole. Note also that you
may have to rotate the input point clouds in order to make the bounding box cut off the unknown part in the right
place, since the edges of the bounding box are always parallel to the coordinate axes. This can be achieved e.g.,
using affine_trans_object_model_3d or rigid_trans_object_model_3d.
Resolution specifies the distance of neighboring grid points in each coordinate direction in the discretization
of the BoundingBox. Resolution is set in the same unit as used in ObjectModel3D. Too small values will
unnecessarily increase the runtime, so it is recommended to begin with a coarse resolution. Too large values will
lead to a reconstruction with high loss of details. Smoothing may need to be adapted when Resolution is
changed. Resolution should always be a bit smaller than SurfaceTolerance in order to avoid discretiza-
tion artifacts.
SurfaceTolerance specifies how much noise in the input point cloud should be combined to the surface from
its inside and outside. The sole exception is when SurfaceTolerance is larger than ’distance_in_front’; in that case,
’distance_in_front’ determines the surface thickness to the front of the object. SurfaceTolerance is set in the
same unit as used in ObjectModel3D. Points in the interior of the object as specified by NormalDirection


(and also GenParamName=’angle_threshold’) are considered surely inside the object if their distance to the initial
surface exceeds SurfaceTolerance but is smaller than MinThickness. SurfaceTolerance always has
to be smaller than MinThickness. SurfaceTolerance should always be a bit larger than Resolution in
order to avoid discretization artifacts.
MinThickness specifies the thickness of the object in normal direction of the initial surfaces. MinThickness
is set in the same unit as used in ObjectModel3D. Points which are specified by NormalDirection (and
also GenParamName=’angle_threshold’) to be in the interior of the object are only considered as being inside if
their distance to the initial surface does not exceed MinThickness. Note that this can lead to a hollow part of
the object. MinThickness always has to be larger than SurfaceTolerance. For point clouds representing
the object from different sides, MinThickness is best set as the thickness of the objects narrowest part. Note
that the operator will try to produce a closed surface. If the input point clouds represent the object only from one
side, this parameter should be set very large, so that the object is cut off at the bounding box. The backside of the
objects is not observed and thus its reconstruction will probably be incorrect. If you observe several distinct objects
from only one side, you may want to reduce the parameter MinThickness to restrict the depth of reconstructed
objects and thus keep them from being smudged into one surface. Too small values can result in holes or double
walls in the fused point cloud. Too large values can result in a distorted point cloud or blow up the surface towards
the outside of the object (if the surface is blown up beyond the bounding box, no points will be returned).

(Figure: Schematic view of the parameters SurfaceTolerance, MinThickness, and the value ’distance_in_front’ for
an example surface. In the figure, ’o’ marks points taken as outside, ’s’ points of the surface, ’i’ points surely inside
the object, and ’c’ points that are also considered for the evaluation of the surface. Case (1): ’distance_in_front’
smaller than SurfaceTolerance; case (2): ’distance_in_front’ larger than SurfaceTolerance.)

Smoothing determines how important a small total variation of the distance function is compared to data fidelity.
Thus, Smoothing regulates the ’jumpiness’ of the resulting surface. Note that the value of Smoothing
that results in an appropriate and visually pleasing surface for given data has to be found by trial and error. Too
small values lead to integrating many outliers into the surface even if the surface then exhibits many jumps. Too
large values lead to lost fidelity towards the input point clouds (how the algorithm views distances to the input
point clouds depends heavily on SurfaceTolerance and MinThickness). Smoothing may need to be
adapted when Resolution is changed.
By setting GenParamName to the following values, the additional parameters can be set with GenParamValue:

’distance_in_front’ Points in the exterior of the object as specified by NormalDirection (and also
GenParamName=’angle_threshold’) are only considered as part of the object if their distance to the ini-
tial surface does not exceed ’distance_in_front’. This is the outside analogous to MinThickness of the
interior, except that ’distance_in_front’ does not have to be larger than SurfaceTolerance. In case ’dis-
tance_in_front’ is smaller than SurfaceTolerance it determines the surface thickness to the front. This
parameter is useful if holes in the surface should be closed along a jump in the surface (for example along
the viewing direction of the sensor). In this case, ’distance_in_front’ can be set to a small value in order
to avoid a wrong initialization of the distance field. ’distance_in_front’ is set in the same unit as used in
ObjectModel3D. ’distance_in_front’ should always be a bit larger than Resolution in order to avoid
discretization artifacts. By default, ’distance_in_front’ is set to a value larger than the bounding box diameter,
so that all points outside of the object in the bounding box are considered.


Suggested values: 0.001, 0.1, 1, 10.


Default: Larger than the bounding box diameter.
Restriction: ’distance_in_front’ > 0
’angle_threshold’ specifies the angle of a cone around a surface normal. ’angle_threshold’ is set in [rad]. When
determining the distance information for data fidelity, only points are considered lying in such a cone starting
at their closest surface point. For example, if distances to triangles are considered, ’angle_threshold’ can
be set to 0.0, so that only the volume directly above the triangle is considered (thus a right prism). If point
normals are used and thus distances to normals are considered, ’angle_threshold’ has to be set to a higher
value. When outliers disrupt the result, decreasing ’angle_threshold’ may help. If holes in the surface should
be closed along a jump in the surface (for example along the viewing direction of the sensor), enlarging
’angle_threshold’ may help.
Suggested values: ’rad(0.0)’, ’rad(10.0)’, ’rad(30.0)’.
Default: ’rad(10.0)’.
Restriction: ’angle_threshold’ >= 0
’point_meshing’ determines whether the output points should be triangulated with the algorithm ’marching tetra-
hedra’, which can be activated by setting ’point_meshing’ to ’isosurface’. Note that there are more points
in ObjectModel3DFusion if meshing of the isosurface is enabled even if the used Resolution is the
same.
List of values: ’none’, ’isosurface’.
Default: ’isosurface’.

Fusion algorithm
The algorithm will produce a watertight, closed surface (which may be cut off at the BoundingBox). The goal
is to obtain a preferably smooth surface while keeping form fidelity. To this end, the bounding box is sampled and
each sample point is assigned an initial distance to a so-called isosurface (consisting of points with distance 0).
The final distance values (and thus the isosurface) are obtained by minimizing an error function based on fidelity
to the initial point clouds on the one hand and total variation (’jumpiness’) of the distance function on the other
hand. This leads to a fusion of the input point clouds (see paper in References below).
The calculation of the isosurface can be influenced with the parameters of the operator. The distance between
sample points in the bounding box (in each coordinate direction) can be set with the parameter Resolution.
Fidelity to the initial point clouds is grasped as the signed distances of sample points, lying on the grid, in the
bounding box to their nearest neighbors (points or triangles) on the input point clouds. Whether a sample point in
the bounding box is considered to lie outside or inside the object (the sign of the distance) is determined by the
normal of its nearest neighbor on the initial surface and the set NormalDirection. To determine if a sample
point is surely inside or outside the object with respect to an input point cloud, the distance to its nearest neighbor
on the initial surface is determined. A point on the inside is considered surely inside if the distance exceeds
SurfaceTolerance but not MinThickness, while a point on the outside counts as exterior if the distance
exceeds ’distance_in_front’.
Fidelity to the initial point clouds is only considered for those sample points lying within MinThickness inside
or within GenParamName ’distance_in_front’ outside the initial surface.
Furthermore, fidelity is not maintained for a sample point lying outside the cone defined by GenParamName
’angle_threshold’, that is, if the line from the sample point to its nearest neighbor on the initial surface deviates
from the surface normal of that nearest neighbor by an angle of more than GenParamName ’angle_threshold’.
Note that distances to the nearest neighboring triangles will often yield more satisfying results, while distances
to the nearest points can be calculated much faster.
The subsequent optimization of the distance values is the same as the one used in
reconstruct_surface_stereo with Method=’surface_fusion’.
The parameter Smoothing regulates the ’jumpiness’ of the distance function by weighing the two terms in the
error function: Fidelity to the initial point clouds on the one hand, total variation of the distance function on the
other hand. Note that the value of Smoothing that yields visually pleasing results for a given data set has to
be found by trial and error.
Each 3D point of the object model returned in ObjectModel3DFusion is extracted from the isosurface where
the distance function equals zero. Its normal vector is calculated from the gradient of the distance function. The so-
obtained point cloud can also be meshed using the algorithm ’marching tetrahedra’ by setting the GenParamName
’point_meshing’ to the GenParamValue ’isosurface’.
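As an illustration, a minimal call sketch is given below. The file names, the bounding box, and all numeric values
are placeholders and not values from this manual; NormalDirection must match the actual data.

* Hypothetical sketch: fuse two registered scans into one watertight surface.
read_object_model_3d ('scan_01.ply', 'm', [], [], ObjectModel3D1, Status1)
read_object_model_3d ('scan_02.ply', 'm', [], [], ObjectModel3D2, Status2)
* Bounding box assumed as the two opposite corners [x1,y1,z1,x2,y2,z2].
BoundingBox := [-0.1,-0.1,-0.1,0.1,0.1,0.1]
* Resolution, SurfaceTolerance, MinThickness, and Smoothing are placeholders.
fuse_object_model_3d ([ObjectModel3D1,ObjectModel3D2], BoundingBox, 0.002, \
                      0.002, 0.01, 1.0, 'outwards', \
                      ['point_meshing','angle_threshold'], \
                      ['isosurface',rad(30)], ObjectModel3DFusion)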


Troubleshooting
Please follow the workflow above. If the results are not satisfactory, please consult the following hints and ideas:

Quality of the input point clouds The input point clouds should represent the entire object surface. If point
normals are used, the points should be dense on the entire surface, not only along edges of the object. In
particular, for CAD data, triangulation typically has to be used.
Used attribute Using triangles instead of point normals will typically yield results of higher quality. If
both attributes are present, point normals are used by default. If triangles should be used, use
copy_object_model_3d to obtain only point and triangle information.
Outliers If outliers of the input models disturb the output surface even for high values of Smoothing, try
to decrease GenParamName ’angle_threshold’. If desired, outliers of the input models can also be removed,
for example using connection_object_model_3d. Modifying GenParamName ’distance_in_front’ may
also help to reduce the influence of certain outliers.
Closing of holes If holes in the surface are not closed even for high values of Smoothing (for example a
jump in the surface along the viewing direction of the sensor), try to decrease GenParamName
’distance_in_front’. Enlarging GenParamName ’angle_threshold’ may help the algorithm to close the gap.
Note that triangulate_object_model_3d can close gaps when triangulating sensor data which con-
tains a 2D mapping.
Empty output If the output contains no point, try to decrease Smoothing. If there is no output even for very
low values of Smoothing, you may want to check if MinThickness is set too large and if the set
NormalDirection is correct.

Runtime
In order to improve the runtime, consider the following hints:

Extent of the bounding box The bounding box should be tight around the volume of interest. Otherwise, the
runtime will increase drastically without any benefit.
Resolution Enlarging the parameter Resolution will speed up the execution considerably.
Used attribute Using point normals instead of triangles will speed up the execution. If both normals and triangles
are present in the input models, normals are used by default.
Density of input point clouds The input point clouds can be thinned out using sample_object_model_3d
(if normals are used) or simplify_object_model_3d with GenParamName ’avoid_triangle_flips’
set to ’true’ (if triangles are used).
Distances to surface Make sure that MinThickness and GenParamName ’distance_in_front’ are not set un-
necessarily large, since this can slow down the preparation and distance computation.

Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handles of the 3D object models.
. BoundingBox (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
The two opposite corners of the bounding box.
. Resolution (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Used resolution within the bounding box.
Default: 1.0
Suggested values: Resolution ∈ {1.0, 1.1, 1.5, 10.0, 100.0}
. SurfaceTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Distance of expected noise to surface.
Default: 1.0
Suggested values: SurfaceTolerance ∈ {1.0, 1.1, 1.5, 10.0, 100.0}
. MinThickness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Minimum thickness of the object in direction of the surface normal.
Default: 1.0
Suggested values: MinThickness ∈ {1.0, 1.1, 1.5, 10.0, 100.0}
. Smoothing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Weight factor for data fidelity.
Default: 1.0
Suggested values: Smoothing ∈ {1.0, 1.1, 1.5, 10.0, 100.0}


. NormalDirection (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Direction of normals of the input models.
Default: ’inwards’
List of values: NormalDirection ∈ {’inwards’, ’outwards’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Name of the generic parameter.
Default: []
List of values: GenParamName ∈ {’point_meshing’, ’angle_threshold’, ’distance_in_front’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; string / real / integer
Value of the generic parameter.
Default: []
Suggested values: GenParamValue ∈ {’isosurface’, ’none’, 0.0, 0.1, 0.175, 0.524}
. ObjectModel3DFusion (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the fused 3D object model.
Result
fuse_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception
is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.

Possible Predecessors
read_object_model_3d, register_object_model_3d_pair,
register_object_model_3d_global, surface_normals_object_model_3d,
triangulate_object_model_3d, simplify_object_model_3d,
get_object_model_3d_params
Possible Successors
write_object_model_3d, create_surface_model
See also
reconstruct_surface_stereo
References
C. Zach, T. Pock, and H. Bischof: “A globally optimal algorithm for robust TV-L1 range image integration.”
Proceedings of IEEE International Conference on Computer Vision (ICCV 2007).
Module
3D Metrology

intersect_plane_object_model_3d ( : : ObjectModel3D,
Plane : ObjectModel3DIntersection )

Intersect a 3D object model with a plane.


intersect_plane_object_model_3d intersects a 3D object model with a plane that is defined by the x-y
plane of the pose that is specified with the parameter Plane. The z-axis of the pose corresponds to the normal of
the plane.
The result is a set of 3D points connected by lines that is returned as 3D object model in
ObjectModel3DIntersection. Every triangle that intersects with the plane creates two intersection points
and a line between the two points. The resulting set of lines is coplanar.
The lines can be displayed with disp_object_model_3d and queried with
get_object_model_3d_params using the parameter ’lines’.
Parameter Broadcasting

This operator supports parameter broadcasting. This means that each parameter can be given as a tuple of length
1 (7 for Plane) or N (N*7 for Plane). Parameters with tuple length 1 (7 for Plane) will be repeated internally
such that the number of computed output models is always N.
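For illustration, a hedged sketch of broadcasting is given below: a single model is intersected with three parallel
planes by stacking three 7-element poses into Plane (the offsets are arbitrary placeholder values).

* Hypothetical sketch: three slices of one model via parameter broadcasting.
Planes := [0,0,-0.01,0,0,0,0, 0,0,0,0,0,0,0, 0,0,0.01,0,0,0,0]
intersect_plane_object_model_3d (ObjectModel3D, Planes, ObjectModel3DSlices)
* ObjectModel3DSlices then contains three intersection models.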
Parameters

. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model.
. Plane (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
Pose of the plane.
Default: [0,0,0,0,0,0,0]
. ObjectModel3DIntersection (output_control) . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handle of the 3D object model that describes the intersection as a set of lines.
Example

gen_object_model_3d_from_points (rand(20)-0.5, rand(20)-0.5, \
                                 rand(20)-0.5, ObjectModel3D)
convex_hull_object_model_3d (ObjectModel3D, ObjectModel3DConvexHull)
intersect_plane_object_model_3d (ObjectModel3DConvexHull, [0,0,0,0,0,0,0], \
                                 ObjectModel3DIntersection)
dev_get_window (WindowHandle)
visualize_object_model_3d (WindowHandle, [ObjectModel3DIntersection, \
                           ObjectModel3DConvexHull], [], [], \
                           ['alpha_1'], [0.5], [], [], [], PoseOut)

Result
intersect_plane_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
Possible Predecessors
read_object_model_3d, select_points_object_model_3d
Possible Successors
connection_object_model_3d
See also
reduce_object_model_3d_by_view
Module
3D Metrology

object_model_3d_to_xyz ( : X, Y, Z : ObjectModel3D, Type,
CamParam, Pose : )

Transform 3D points from a 3D object model to images.


The operator object_model_3d_to_xyz transforms the 3D points of the 3D object model
ObjectModel3D into the three images X, Y, and Z.
Three transformation modes are possible. The parameter Type is used to select one of them. Note that multiple 3D
object models can be passed in ObjectModel3D only for the mode ’cartesian_faces’. All other modes expect a
single 3D object model.


’cartesian’: First, each point is transformed into the camera coordinate system using the given Pose. Then,
these coordinates are projected into the image coordinate system based on the internal camera parameters
CamParam.
The internal camera parameters CamParam describe the projection characteristics of the camera (see
Calibration). The Pose is in the form ccs_P_mcs, where ccs denotes the camera coordinate system and mcs the model
coordinate system (which is a 3D world coordinate system), see Transformations / Poses and “Solution
Guide III-C - 3D Vision”. Hence, it describes the position and orientation of the model coordinate
system relative to the camera coordinate system.
The X-, Y-, and Z-coordinates of the transformed point are written into the corresponding image at the
position of the projection. If multiple points are projected to the same image coordinates, the point with the
smallest Z-value is written (hidden-point removal). The dimensions of the returned images are defined by the
camera parameters.
The returned images show the object as it would look when seen with the specified camera under the
specified pose.
’cartesian_faces’: In order to use this transformation, the input 3D object models need to contain faces
(triangles or polygons); otherwise, a 3D object model without faces is disregarded. Note that if the 3D
object models have polygon faces, those are converted internally to triangles. This conversion can be
done beforehand to speed up this operator. For this, read_object_model_3d can be called with
the GenParamName ’convert_to_triangles’ set to ’true’ to convert all faces to triangles. Alternatively,
triangulate_object_model_3d can be called prior to this operator.
First, each face of the 3D object models ObjectModel3D is transformed into the camera coordinate system
using the given Pose. Then, these coordinates are projected into the image coordinate system based on the
internal camera parameters CamParam, while keeping the 3D information (X-, Y-, and Z-coordinates) for
each of those pixels. For a more detailed explanation of CamParam and Pose please refer to the section
’cartesian’. If multiple faces are projected to the same image coordinates, the value with the smallest
Z-value is written (hidden-point removal). The dimensions of the returned images are defined by the camera
parameters.
The returned images show the objects as they would look when seen with the specified camera
under the specified pose.
If OpenGL 2.1, GLSL 1.2, and the OpenGL extensions GL_EXT_framebuffer_object and
GL_EXT_framebuffer_blit are available, the computation is accelerated.
This Type can be used to create 3D object models containing 2D mapping data, by creating a 3D
object model from the returned images using xyz_to_object_model_3d. Note that in many cases, it is
recommended to use the 2D mapping data, if available, for speed and robustness reasons. This is beneficial
for example when using sample_object_model_3d, surface_normals_object_model_3d, or
when preparing a 3D object model for surface-based matching, e.g., smoothing, removing outliers, and
reducing the domain.
’cartesian_faces_no_opengl’: This transformation mode works in the same way as the method ’cartesian_faces’
but does not use OpenGL. In general, ’cartesian_faces’ automatically determines if OpenGL is available.
Thus, it is usually not required to use ’cartesian_faces_no_opengl’ explicitly. It can make sense, however,
to use it in cases where the automatic mode selection does not work due to, for example, driver issues with
OpenGL.
’from_xyz_map’: This transformation mode works only if the 3D object model was created with the operator
xyz_to_object_model_3d. It writes each 3D point to the image coordinate where it originally came
from, using the mapping attribute that is stored within the 3D object model.
The parameters CamParam and Pose are ignored. The dimensions of the returned images are equal to
the dimensions of the original images that were used with xyz_to_object_model_3d to create the 3D
object model and can be queried from get_object_model_3d_params with ’mapping_size’.
This transformation mode is faster than ’cartesian’. It is suitable, e.g., to visualize the results of a
segmentation done with segment_object_model_3d.
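As an illustration of the ’cartesian’ mode, a hedged sketch follows. The camera parameters and the pose are
placeholders and have to be replaced by the calibrated values of the actual setup.

* Hypothetical sketch: project a 3D object model into X, Y, and Z images.
gen_cam_par_area_scan_division (0.016, 0, 7.4e-6, 7.4e-6, 320, 240, \
                                640, 480, CamParam)
create_pose (0, 0, 0.5, 0, 0, 0, 'Rp+T', 'gba', 'point', Pose)
object_model_3d_to_xyz (X, Y, Z, ObjectModel3D, 'cartesian', CamParam, Pose)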
Attention
Cameras with hypercentric lenses are not supported. For displaying large faces with a non-zero distortion in
CamParam, note that the distortion is only applied to the points of the model. In the projection, these points are
subsequently connected by straight lines. For a good approximation of the distorted lines, please use a triangulation
with sufficiently small triangles.


Parameters
. X (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Image with the X-Coordinates of the 3D points.
. Y (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Image with the Y-Coordinates of the 3D points.
. Z (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Image with the Z-Coordinates of the 3D points.
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the conversion.
Default: ’cartesian’
List of values: Type ∈ {’cartesian’, ’cartesian_faces’, ’from_xyz_map’, ’cartesian_faces_no_opengl’}
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Camera parameters.
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Pose of the 3D object model.
Number of elements: Pose == 0 || Pose == 7 || Pose == 12
Result
The operator object_model_3d_to_xyz returns the value 2 (H_MSG_TRUE) if the given parameters are
correct. Otherwise, an exception will be raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d, triangulate_object_model_3d
Alternatives
project_object_model_3d
See also
xyz_to_object_model_3d, get_object_model_3d_params
Module
3D Metrology

prepare_object_model_3d ( : : ObjectModel3D, Purpose,
OverwriteData, GenParamName, GenParamValue : )

Prepare a 3D object model for a certain operation.


The operator prepare_object_model_3d prepares the 3D object model ObjectModel3D for a following
operation given in Purpose. It computes values required for the operation and stores them in ObjectModel3D,
thus speeding up the following operation. It is not necessary to call prepare_object_model_3d. However,
if the 3D object model is to be used multiple times for the same operation, it can be faster to do so.
The following values are possible for Purpose:

’shape_based_matching_3d’: The 3D object model is prepared to be used in create_shape_model_3d. For
this, there are no generic parameters to set.
’segmentation’: The 3D object model is prepared to be used in segment_object_model_3d. For the
preparation, the 3D object model must have an attribute with the face triangles and an attribute with the 3D point
coordinates.
If the 3D object model has no attribute with the face triangles, a simple triangulation is performed (even if
OverwriteData is set to ’false’). For this, the 3D object model must have an attribute with the 3D point
coordinates and an attribute with the mapping from the point coordinates to image coordinates. Only points
originating from neighboring pixels are triangulated. Additionally, holes in the image region can be filled
with a Delaunay triangulation (see ’max_area_holes’ below). Only holes which are completely surrounded
by the image region are closed.
’distance_computation’: The 3D object model is prepared to be used in distance_object_model_3d.
’gen_xyz_mapping’: The XYZ-mapping information of a 3D object model containing an ordered point cloud is
computed, i.e. image coordinates are assigned for each 3D point. For this, either the generic parameter
’xyz_map_width’ or ’xyz_map_height’ must be set, to indicate whether the point cloud is ordered row-wise
or column-wise and define the image dimensions (see ’xyz_map_width’ and ’xyz_map_height’ below).
Note that in many cases, it is recommended to use the 2D mapping data, if available, for speed
and robustness reasons. This is beneficial especially when using sample_object_model_3d,
surface_normals_object_model_3d, or when preparing a 3D object model for surface-based
matching, e.g., smoothing, removing outliers, and reducing the domain.
The parameter OverwriteData defines whether the existing data of an already prepared 3D object model shall
be removed. If OverwriteData is set to ’true’, the prepared data defined with the parameter Purpose is
overwritten. If OverwriteData is set to ’false’, the prepared data is not overwritten. If there is no prepared data,
OverwriteData is ignored and the data is saved in the 3D object model. The parameter OverwriteData can be
used for choosing another set of generic parameters GenParamName and GenParamValue. The parameter
OverwriteData has no influence if the parameter Purpose is set to ’shape_based_matching_3d’, because for
that, there are no generic parameters to set.
The generic parameters can optionally be used to influence the preparation. If desired, these parameters and
their corresponding values can be specified by using GenParamName and GenParamValue, respectively. The
following values for GenParamName are possible:
’max_area_holes’: This parameter is only valid if Purpose is set to ’segmentation’. The parameter specifies
up to which area holes in the point coordinates are closed during a simple Delaunay triangulation. Only holes
which are completely surrounded by the image region are closed. If ’max_area_holes’ is set to 0, no holes
are triangulated. If ’max_area_holes’ is set to a value greater than or equal to 1 pixel, holes with an area
less than or equal to ’max_area_holes’ are closed by meshing.
Suggested values: 1, 10, 100.
Default: 10.
’distance_to’: This parameter is only valid if Purpose is set to ’distance_computation’. The parameter specifies
the type of data to which the distance shall be computed to. It is described in more detail in the documentation
of distance_object_model_3d.
List of values: ’auto’, ’triangles’, ’points’, ’primitive’.
Default: ’auto’.
’method’: This parameter is only valid if Purpose is set to ’distance_computation’. The parameter specifies
the method to be used for the distance computation. It is described in more detail in the documentation of
distance_object_model_3d.
List of values: ’auto’, ’kd-tree’, ’voxel’, ’linear’.
Default: ’auto’.
’max_distance’: This parameter is only valid if Purpose is set to ’distance_computation’. The parameter
specifies the maximum distance of interest for the distance computation. If it is set to 0, no maximum distance is
used. It is described in more detail in the documentation of distance_object_model_3d.
Suggested values: 0, 0.1, 1, 10.
Default: 0.
’sampling_dist_rel’: This parameter is only valid if Purpose is set to ’distance_computation’. The parameter
specifies the relative sampling distance when computing the distance to triangles with the method ’voxel’. It
is described in more detail in the documentation of distance_object_model_3d.
Suggested values: 0.03, 0.01.
Default: 0.03.
’sampling_dist_abs’: This parameter is only valid if Purpose is set to ’distance_computation’. The parameter
specifies the absolute sampling distance when computing the distance to triangles with the method ’voxel’. It
is described in more detail in the documentation of distance_object_model_3d.
Suggested values: 1, 5, 10.
Default: None.


’xyz_map_width’: This parameter is only valid if Purpose is set to ’gen_xyz_mapping’. The parameter
indicates that the point cloud is ordered row-wise and the passed value is used as the width of the image.
The height of the image is calculated automatically. Only one of the two parameters ’xyz_map_width’ and
’xyz_map_height’ can be set.
Default: None.
’xyz_map_height’: This parameter is only valid if Purpose is set to ’gen_xyz_mapping’. The parameter
indicates that the point cloud is ordered column-wise and the passed value is used as the height of the image.
The width of the image is calculated automatically. Only one of the two parameters ’xyz_map_width’ and
’xyz_map_height’ can be set.
Default: None.
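As a hedged sketch, preparing a model once for repeated distance computations could look as follows; the chosen
generic parameter value and the subsequent distance call are illustrative only.

* Hypothetical sketch: prepare for several calls to distance_object_model_3d.
prepare_object_model_3d (ObjectModel3D, 'distance_computation', 'true', \
                         'distance_to', 'triangles')
distance_object_model_3d (ObjectModel3DFrom, ObjectModel3D, [], 0, [], [])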

Parameters

. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model.
. Purpose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Purpose of the 3D object model.
Default: ’shape_based_matching_3d’
Suggested values: Purpose ∈ {’shape_based_matching_3d’, ’segmentation’, ’distance_computation’,
’gen_xyz_mapping’}
. OverwriteData (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Specify if already existing data should be overwritten.
Default: ’true’
List of values: OverwriteData ∈ {’true’, ’false’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string / real / integer
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’max_area_holes’, ’distance_to’, ’method’, ’max_distance’,
’sampling_dist_rel’, ’sampling_dist_abs’, ’xyz_map_width’, ’xyz_map_height’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; string / real / integer
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {0, 1, 100, ’auto’, ’triangles’, ’points’, ’primitive’, ’kd-tree’, ’voxel’,
’linear’, 0.01, 0.03}
Example

read_object_model_3d ('object_model_3d', 'm', [], [], ObjectModel3D, Status)
prepare_object_model_3d (ObjectModel3D, 'gen_xyz_mapping', 'true', \
                         'xyz_map_width', Width)
object_model_3d_to_xyz (X, Y, Z, ObjectModel3D, 'from_xyz_map', [], [])

Result
The operator prepare_object_model_3d returns the value 2 (H_MSG_TRUE) if the given parameters are
correct. Otherwise, an exception will be raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d
Possible Successors
create_shape_model_3d, create_surface_model, distance_object_model_3d,
find_surface_model, fit_primitives_object_model_3d, refine_surface_model_pose,
segment_object_model_3d, simplify_object_model_3d, sample_object_model_3d,
surface_normals_object_model_3d
Module
3D Metrology

project_object_model_3d ( : ModelContours : ObjectModel3D,
CamParam, Pose, GenParamName, GenParamValue : )

Project a 3D object model into image coordinates.


The operator project_object_model_3d projects a 3D object model into the image coordinate system and
returns the projected contours in ModelContours. This operator is particularly useful for the visualization of 3D
object models. Note that primitives are not projected but silently ignored. The coordinates of the 3D object model
are given in the model coordinate system (mcs), a 3D world coordinate system. First, they are transformed into
the camera coordinate system (ccs) using the given Pose. Then, these coordinates are projected into the image
coordinate system based on the internal camera parameters CamParam. Thereby the pose is needed in the form
ccs_P_mcs, see Transformations / Poses and “Solution Guide III-C - 3D Vision”. Thus, the Pose
describes the position and orientation of the world coordinate system with respect to the camera coordinate system.
The internal camera parameters CamParam describe the projection characteristics of the camera (see Calibration).
There are some generic parameters that can optionally be used to influence the projection. If desired, these
parameters and their corresponding values can be specified by using GenParamName and GenParamValue,
respectively. The following values for GenParamName are possible:

’data’: This parameter specifies which geometric data of the 3D object model should be projected. If ’data’ is
set to ’faces’, the faces of the 3D object model are projected. The faces are represented by their border
lines in ModelContours. If ’data’ is set to ’lines’, the 3D lines of the 3D object model are projected.
If ’data’ is set to ’points’, the points of the 3D object model are projected. The projected points can be
represented in ModelContours in different ways. The point representation can be selected by using the
generic parameter ’point_shape’ (see below). Finally, if ’data’ is set to ’auto’, HALCON automatically
chooses the most descriptive geometry data that is available in the 3D object model for visualization.
List of values: ’auto’, ’faces’, ’lines’, ’points’.
Default: ’auto’.
’point_shape’: This parameter specifies how points are represented in the output contour ModelContours.
Consequently, this parameter only has an effect if the points of the 3D object model are selected for projection
(see above). If ’point_shape’ is set to ’circle’, points are represented by circles, whereas if ’point_shape’ is
set to ’cross’, points are represented by crosses. In both cases the size of the points (i.e., the size of the circles
or the size of the crosses) can be specified by the generic parameter ’point_size’ (see below). The orientation
of the crosses can be specified by the generic parameter ’point_orientation’ (see below).
List of values: ’circle’, ’cross’.
Default: ’circle’.
’point_size’: This parameter specifies the size of the point representation in the output contour ModelContours,
i.e., the size of the circles or the size of the crosses depending on the selected ’point_shape’. Consequently,
this parameter only has an effect if the points of the 3D object model are selected for projection (see above).
The size must be given in pixel units. If ’point_size’ is set to 0, each point is represented by a contour that
contains a single contour point.
Suggested values: 0, 2, 4.
Default: 4.
’point_orientation’: This parameter specifies the orientation of the crosses in radians. Consequently, this
parameter only has an effect if the points of the 3D object model are selected for projection and ’point_shape’ is set
to ’cross’ (see above).
Suggested values: 0, 0.39, 0.79.
Default: 0.79.
’union_adjacent_contours’: This parameter specifies if adjacent projected contours should be joined or not.
Activating this option is equivalent to calling union_adjacent_contours_xld after this operator, but
significantly faster.
List of values: ’true’, ’false’.
Default: ’true’.
’hidden_surface_removal’: This parameter can be used to switch on or off the removal of hidden surfaces. If
’hidden_surface_removal’ is set to ’true’, only those projected edges are returned that are not hidden by
faces of the 3D object model. If ’hidden_surface_removal’ is set to ’false’, all projected edges are returned.
This is faster than a projection with ’hidden_surface_removal’ set to ’true’.
If the system variable (see set_system) ’opengl_hidden_surface_removal_enable’ is set to ’true’ (which
is the default if it is available) and ’hidden_surface_removal’ is set to ’true’, the projection of the model is
accelerated using the graphics card. Depending on the graphics card, this is significantly faster than the
non-accelerated algorithm. Be aware that the results of the OpenGL projection are slightly different compared to
the analytic projection. Notably, only the contours visible through CamParam are projected in this mode.
List of values: ’true’, ’false’.
Default: ’true’.
’min_face_angle’: 3D edges are only projected if the angle between the two 3D faces that are incident with
the 3D edge is at least ’min_face_angle’. If ’min_face_angle’ is set to 0.0, all edges are projected. If
’min_face_angle’ is set to π (equivalent to 180 degrees), only the silhouette of the 3D object model is
returned. This parameter can be used to suppress edges within curved surfaces, e.g., the surface of a cylinder
or cone.
Suggested values: 0.17, 0.26, 0.35, 0.52.
Default: 0.52.
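A hedged usage sketch follows; the camera parameters and the pose are placeholders. It projects the faces of a
model and suppresses edges within curved surfaces via ’min_face_angle’.

* Hypothetical sketch: project a model to XLD contours for visualization.
gen_cam_par_area_scan_division (0.016, 0, 7.4e-6, 7.4e-6, 320, 240, \
                                640, 480, CamParam)
create_pose (0.05, 0, 0.4, 0, 0, 0, 'Rp+T', 'gba', 'point', Pose)
project_object_model_3d (ModelContours, ObjectModel3D, CamParam, Pose, \
                         ['data','min_face_angle'], ['faces',rad(30)])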

Attention
Cameras with hypercentric lenses are not supported. For displaying large faces with a non-zero distortion in
CamParam, note that the distortion is only applied to the points of the model. In the projection, these points are
subsequently connected by straight lines. For a good approximation of the distorted lines, please use a triangulation
with sufficiently small triangles.
Parameters
. ModelContours (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; object
Projected model contours.
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
3D pose of the world coordinate system in camera coordinates.
Number of elements: Pose == 7
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Name of the generic parameter.
Default: []
List of values: GenParamName ∈ {’hidden_surface_removal’, ’min_face_angle’, ’data’,
’point_shape’, ’point_size’, ’point_orientation’, ’union_adjacent_contours’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string / integer / real
Value of the generic parameter.
Default: []
Suggested values: GenParamValue ∈ {0.17, 0.26, 0.35, 0.52, ’true’, ’false’, ’auto’, ’points’, ’faces’,
’lines’, ’circle’, ’cross’, 1, 2, 3, 4, 0.785398}
Result
project_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an
exception is raised. If the geometric data that was selected for the projection is not available in the 3D object model, the
error 9514 is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).

• Processed without parallelization.


Possible Predecessors
read_object_model_3d, affine_trans_object_model_3d, prepare_object_model_3d
Possible Successors
clear_object_model_3d
See also
project_shape_model_3d, object_model_3d_to_xyz
Module
3D Metrology

projective_trans_object_model_3d ( : : ObjectModel3D,
HomMat3D : ObjectModel3DProjectiveTrans )

Apply an arbitrary projective 3D transformation to 3D object models.


projective_trans_object_model_3d applies an arbitrary projective 3D transformation to the points
of 3D object models and returns the handles of the transformed 3D object models. The projective
transformation is described by the homogeneous transformation matrix given in HomMat3D (see
projective_trans_point_3d).
The transformation matrix can be created, e.g., using the operator vector_to_hom_mat3d.
Attention
projective_trans_object_model_3d transforms the attributes of type 3D points. Attributes of type
shape model for shape-based 3D matching, of type 3D primitive, and of type normals are not transformed.
Therefore, these attributes do not exist in the transformed 3D object model. All other attributes are copied without
modification. To transform 3D primitives, the operator rigid_trans_object_model_3d must be used.
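A minimal hedged sketch follows; the scaling factors are arbitrary placeholders. An anisotropic scaling is one
example of a transformation that cannot be expressed as a rigid 3D transformation.

* Hypothetical sketch: scale a model anisotropically.
hom_mat3d_identity (HomMat3DIdentity)
hom_mat3d_scale (HomMat3DIdentity, 2.0, 1.0, 0.5, 0, 0, 0, HomMat3D)
projective_trans_object_model_3d (ObjectModel3D, HomMat3D, \
                                  ObjectModel3DProjectiveTrans)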
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handles of the 3D object models.
. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d ; real
Homogeneous projective transformation matrix.
. ObjectModel3DProjectiveTrans (output_control) . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handles of the transformed 3D object models.
Result
If the parameters are valid, the operator projective_trans_object_model_3d returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.

Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d
Possible Successors
project_object_model_3d, object_model_3d_to_xyz
See also
affine_trans_point_3d, rigid_trans_object_model_3d,
affine_trans_object_model_3d
Module
3D Metrology


register_object_model_3d_global ( : : ObjectModels3D, HomMats3D,
From, To, GenParamName, GenParamValue : HomMats3DOut, Scores )

Improve the relative transformations between 3D object models based on their overlaps.
register_object_model_3d_global improves the relative transformations between 3D object models,
which is called global registration. In particular, under the assumption that all input 3D object models in
ObjectModels3D have a known approximate spatial relation, all possible pairwise overlapping areas are
calculated and optimized for a better alignment. The resulting offset is then synchronously minimized for all pairs.
The entire process is then repeated iteratively from the newly resulting starting poses. The result in HomMats3DOut
describes a transformation that can be applied with affine_trans_object_model_3d to the input 3D
object models to transform them all into a common reference frame. Scores contains for every 3D object model the number
of found neighbors with a sufficient overlap. If no overlap is found for at least one object, an exception is raised.
Three types for the interpretation of the starting poses in HomMats3D are available, which is controlled by the
parameters From and To:
First, if From is set to ’global’, the parameter HomMats3D must contain a rigid transformation with 12 entries for
each 3D object model in ObjectModels3D that describes its position in relation to a common global reference
frame. In this case, To must be empty. This case is suitable, e.g., if transformations are applied by a turning table
or a robot to either the camera or the object. In this case, all neighborhoods that are possible are considered for the
global optimization.
Second, if From is set to ’previous’, the parameter HomMats3D must contain a rigid transformation for
each subsequent pair of 3D object models in ObjectModels3D (one less than for the first case). An
example for this situation might be a matching applied consecutively to the previous frame (e.g., with
register_object_model_3d_pair). To must be empty again. In this case, all neighborhoods that are
possible are considered for the global optimization.
Third, you can describe any transformation in HomMats3D by setting From and To to the indices of the 3D
object models for which the corresponding transformation is valid. That is, a given transformation describes the
transformation that is needed to move the 3D object model with the index that is specified in From into the
coordinate system of the 3D object model with the corresponding index that is specified in To. In this case,
HomMats3D should contain all possible neighborhood relations between the objects, since no other than these
neighborhoods are considered for the optimization. Note that for every 3D object model, at least one path
of transformations to each other 3D object model must be contained in the specified transformations.
If ObjectModels3D contains 3D-primitives, they will internally be transformed into point clouds and will be
considered as such.
The accuracy of the returned poses is limited to around 0.1% of the size of the point clouds due to numerical
reasons. The accuracy further depends on the noise of the data points, the number of data points and the shape of
the point clouds.
The process of the global registration can be controlled further by the following generic parameters in
GenParamName and GenParamValue:
’default_parameters’: Allows to choose between two default parameter sets, i.e., it allows to switch between a
’fast’ and an ’accurate’ set of parameters.
List of values: ’fast’, ’accurate’.
Default: ’accurate’.
’rel_sampling_distance’: The relative sampling rate of the 3D object models. This value is relative to the object’s
diameter and refers to the minimal distance between two sampled points. A higher value leads to faster
results, whereas a lower value leads to more accurate results.
Suggested values: 0.03, 0.05, 0.07.
Default: 0.05 (’default_parameters’ = ’accurate’), 0.07 (’default_parameters’ = ’fast’).
Restriction: 0 < ’rel_sampling_distance’ < 1
’pose_ref_sub_sampling’: Number of points that are skipped for the pose refinement. The value specifies the
number of points that are skipped per selected point. Increasing this value allows faster convergence at the
cost of less accurate results. The internally used method for the refinement is asymmetric and this parameter
only affects the second model of each tested pair.
Suggested values: 1, 2, 20.
Default: 2 (’default_parameters’ = ’accurate’), 10 (’default_parameters’ = ’fast’).
Restriction: ’pose_ref_sub_sampling’ > 0


’max_num_iterations’: Number of iterations applied to adjust the initial alignment. The better the initial alignment
is, the less iterations are necessary.
Suggested values: 1, 3, 10.
Default: 3.
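As a hedged sketch, a typical chain of pairwise registration followed by global refinement could look as follows;
the registration direction in the loop and the parameter value are assumptions that have to be adapted to the data.

* Hypothetical sketch: register consecutive scans pairwise, then refine globally.
HomMats3D := []
for I := 1 to |ObjectModels3D| - 1 by 1
    * Assumed direction: each scan is registered against the previous one.
    register_object_model_3d_pair (ObjectModels3D[I], ObjectModels3D[I-1], \
                                   'matching', [], [], PosePair, ScorePair)
    pose_to_hom_mat3d (PosePair, HomMat3DPair)
    HomMats3D := [HomMats3D,HomMat3DPair]
endfor
register_object_model_3d_global (ObjectModels3D, HomMats3D, 'previous', [], \
                                 'max_num_iterations', 5, HomMats3DOut, Scores)
* Apply the resulting transformations (N:N) to align all models.
affine_trans_object_model_3d (ObjectModels3D, HomMats3DOut, \
                              ObjectModels3DAligned)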

Parameters
. ObjectModels3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handles of several 3D object models.
. HomMats3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; real / integer
Approximate relative transformations between the 3D object models.
. From (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; string / integer
Type of interpretation for the transformations.
Default: ’global’
List of values: From ∈ {’global’, ’previous’, 0, 1, 2, 3, 4}
. To (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; integer
Target indices of the transformations if From specifies the source indices, otherwise the parameter must be
empty.
Default: []
List of values: To ∈ {0, 1, 2, 3, 4}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
Names of the generic parameters that can be adjusted for the global 3D object model registration.
Default: []
List of values: GenParamName ∈ {’default_parameters’, ’rel_sampling_distance’,
’pose_ref_sub_sampling’, ’max_num_iterations’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer / string
Values of the generic parameters that can be adjusted for the global 3D object model registration.
Default: []
Suggested values: GenParamValue ∈ {0.03, 0.05, 0.07, 0.1, 0.25, 0.5, 1, 2, 5, 10, 20, ’fast’, ’accurate’}
. HomMats3DOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; real / integer
Resulting transformations.
. Scores (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real
Number of overlapping neighbors for each 3D object model.
Result
register_object_model_3d_global returns 2 (H_MSG_TRUE) if all parameters are correct. If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d, register_object_model_3d_pair,
gen_object_model_3d_from_points
Possible Successors
affine_trans_object_model_3d, union_object_model_3d, sample_object_model_3d,
triangulate_object_model_3d
See also
register_object_model_3d_pair, find_surface_model, refine_surface_model_pose
Module
3D Metrology


register_object_model_3d_pair ( : : ObjectModel3D1,
ObjectModel3D2, Method, GenParamName, GenParamValue : Pose,
Score )

Search for a transformation between two 3D object models.


register_object_model_3d_pair searches for a transformation between two 3D object models having an
optimal alignment. This process is called registration. The transformation that is returned in Pose can be used to
transform ObjectModel3D1 to the reference frame of the second object ObjectModel3D2. Score returns
the ratio of the overlapping to the non-overlapping parts of the two 3D object models. If the two objects do
not overlap, no pose is returned. The parameter Method determines whether the initial relative position is calculated
by ’matching’, or whether only the pose refinement is performed, assuming that both models already share a
common global reference frame; the latter can be selected directly with ’icp’.
The accuracy of the returned pose is limited to around 0.1% of the size of the point clouds due to numerical reasons.
The accuracy further depends on the noise of the data points, the number of data points and the shape of the point
clouds.
The matching process and the following refinement can be controlled using the following name-value pairs in
GenParamName and GenParamValue:

’default_parameters’: To allow an easy control over the parameters, three different sets of parameters are available.
Selecting the ’fast’ parameter set allows a shorter calculation time. ’accurate’ will give more accurate results.
’robust’ additionally improves the quality of the resulting Score at the cost of calculation time.
List of values: ’fast’, ’accurate’, ’robust’.
Default: ’accurate’.
’rel_sampling_distance’: This parameter controls the relative sampling rate of the 3D object models that is used
to represent the surfaces for the computation. This value is relative to the diameter of the respective object
and defines the minimal distance between two sampled points. A higher value will lead to faster and a
lower value to more accurate results. This parameter can also be set for each object independently by using
’rel_sampling_distance_obj1’ and ’rel_sampling_distance_obj2’.
Suggested values: 0.03, 0.05, 0.07.
Default: 0.05.
’key_point_fraction’: This parameter controls the ratio of sampled points that are considered as key points for the
matching process. The number is relative to the sampled points of the model. Reducing this ratio speeds up
the process, whereas increasing it leads to more robust results. This parameter can also be set for each object
independently by using ’key_point_fraction_obj1’ and ’key_point_fraction_obj2’.
Suggested values: 0.2, 0.3, 0.4.
Default: 0.3.
’pose_ref_num_steps’: The number of iterative steps used for the pose refinement.
Suggested values: 5, 7, 10.
Default: 5.
’pose_ref_sub_sampling’: Number of points that are skipped for the pose refinement. The value specifies the
number of points that are skipped per selected point. Increasing this value allows faster convergence at the
cost of less accurate results. This parameter is only relevant for the smaller of the two objects.
Suggested values: 1, 2, 20.
Default: 2.
’pose_ref_dist_threshold_rel’: Maximum distance that two faces may have in order to still be considered as
potentially overlapping. This value is relative to the diameter of the larger object.
Suggested values: 0.05, 0.1, 0.15.
Default: 0.1.
’pose_ref_dist_threshold_abs’: Maximum distance that two faces may have in order to still be considered as
potentially overlapping, as an absolute value.
’model_invert_normals’: Invert the normals of the smaller object, if its normals are inverted relative to the other
object.
List of values: ’true’, ’false’.
Default: ’false’.


Parameters
. ObjectModel3D1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the first 3D object model.
. ObjectModel3D2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the second 3D object model.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Method for the registration.
Default: ’matching’
List of values: Method ∈ {’matching’, ’icp’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’default_parameters’, ’rel_sampling_distance’,
’rel_sampling_distance_obj1’, ’rel_sampling_distance_obj2’, ’key_point_fraction’,
’key_point_fraction_obj1’, ’key_point_fraction_obj2’, ’pose_ref_num_steps’, ’pose_ref_sub_sampling’,
’pose_ref_dist_threshold_rel’, ’pose_ref_dist_threshold_abs’, ’model_invert_normals’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer / string
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {’fast’, ’accurate’, ’robust’, 0.1, 0.25, 0.5, 1, ’true’, ’false’}
. Pose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Pose to transform ObjectModel3D1 in the reference frame of ObjectModel3D2.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real
Overlapping of the two 3D object models.
Example

* Generate two boxes
gen_box_object_model_3d ([0,0,0,0,0,0,0], 3, 2, 1, ObjectModel3D1)
gen_box_object_model_3d ([0,0,0.5,15,0,0,0], 3, 2, 1, ObjectModel3D2)
* Match them
register_object_model_3d_pair (ObjectModel3D1, ObjectModel3D2, 'matching', \
                               [], [], Pose, Score)

Result
register_object_model_3d_pair returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary,
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
read_object_model_3d, gen_object_model_3d_from_points, xyz_to_object_model_3d
Possible Successors
register_object_model_3d_global, affine_trans_object_model_3d,
union_object_model_3d
See also
register_object_model_3d_global, find_surface_model
Module
3D Metrology


render_object_model_3d ( : Image : ObjectModel3D, CamParam, Pose,
GenParamName, GenParamValue : )

Render 3D object models to get an image.


render_object_model_3d renders the 3D object models of ObjectModel3D and returns the result in the
image Image. To set up the scene to display, set CamParam and the individual Pose of the objects. Be aware
that Pose can contain either one pose for each object or one pose for all objects.
The view of the output image is identical to that produced by disp_object_model_3d. The parameters
and additional details are documented with disp_object_model_3d, except that the parameters
’object_index_persistence’ and ’disp_background’ cannot be set.
render_object_model_3d requires OpenGL 2.1, GLSL 1.2, and the OpenGL extensions
GL_EXT_framebuffer_object and GL_EXT_framebuffer_blit. Otherwise the compatibility mode is automatically
enabled. The compatibility mode requires OpenGL 1.1.
Attention
Cameras with hypercentric lenses are not supported.
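A hedged sketch follows; the camera parameters, the pose, and the display parameters are placeholders.

* Hypothetical sketch: render a 3D object model into an image.
gen_cam_par_area_scan_division (0.016, 0, 7.4e-6, 7.4e-6, 320, 240, \
                                640, 480, CamParam)
create_pose (0, 0, 0.5, 0, 0, 0, 'Rp+T', 'gba', 'point', Pose)
render_object_model_3d (Image, ObjectModel3D, CamParam, Pose, \
                        ['color','disp_pose'], ['green','true'])
dev_display (Image)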
Parameters
. Image (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image ; object : byte
Rendered scene.
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handles of the 3D object models.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Camera parameters of the scene.
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
3D poses of the objects.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’alpha’, ’attribute’, ’color’, ’colored’, ’disp_lines’, ’disp_pose’,
’disp_normals’, ’light_position’, ’line_color’, ’normal_color’, ’quality’, ’compatibility_mode_enable’,
’point_size’, ’color_attrib’, ’color_attrib_start’, ’color_attrib_end’, ’red_channel_attrib’,
’blue_channel_attrib’, ’green_channel_attrib’, ’rgb_channel_attrib_start’, ’rgb_channel_attrib_end’, ’lut’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string / integer / real
Values of the generic parameters.
Default: []
List of values: GenParamValue ∈ {’true’, ’false’, ’coord_x’, ’coord_y’, ’coord_z’, ’normal_x’,
’normal_y’, ’normal_z’, ’red’, ’green’, ’blue’, ’auto’, ’faces’, ’primitive’, ’points’, ’lines’}
Result
render_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception
is raised.
Execution Information

• Multithreading type: mutually exclusive (runs in parallel with other non-exclusive operators, but not with
itself).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
find_surface_model, fit_primitives_object_model_3d, segment_object_model_3d,
read_object_model_3d, xyz_to_object_model_3d
Possible Successors
disp_obj
See also
disp_object_model_3d, project_shape_model_3d, object_model_3d_to_xyz
Module
3D Metrology


rigid_trans_object_model_3d ( : : ObjectModel3D,
Pose : ObjectModel3DRigidTrans )

Apply a rigid 3D transformation to 3D object models.


rigid_trans_object_model_3d applies rigid 3D transformations, i.e., rotations and translations, to 3D
object models and returns the handles of the transformed 3D object models. The transformations are described by
the poses given in Pose, which are in the form cst Pmcsi , where mcsi denotes the coordinate system of the input
object model and cst the coordinate system of the transformed model, e.g., the coordinate system of the scene (see
Transformations / Poses and “Solution Guide III-C - 3D Vision”). A pose can be created using the
operators create_pose, pose_invert, etc., or it can be the result of get_object_model_3d_params.
rigid_trans_object_model_3d transforms one or more 3D object models with the same pose if only one
pose is passed in Pose (N:1). If a single 3D object model is passed in ObjectModel3D,
it is transformed with all passed poses (1:N). If the number of poses corresponds to the number of 3D object
models, every 3D object model is transformed individually with the respective pose (N:N). In these cases, N
can also be zero, i.e., no pose or no 3D object model is passed to the operator. In this case, an empty
tuple is returned in ObjectModel3DRigidTrans. This can be used, for example, to transform the results of
find_surface_model without first checking if at least one match was returned.
Attention
rigid_trans_object_model_3d transforms the attributes of type 3D points, 3D point normals, and the
prepared shape model for shape-based 3D matching, as well as 3D primitives. Precomputed data structures for 3D
distance computation are not copied. All other attributes are copied without modification.
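A minimal hedged sketch is given below; the pose values are placeholders.

* Hypothetical sketch: move a model by a fixed pose into a scene coordinate system.
create_pose (0.1, -0.05, 0.3, 0, 0, 90, 'Rp+T', 'gba', 'point', Pose)
rigid_trans_object_model_3d (ObjectModel3D, Pose, ObjectModel3DRigidTrans)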
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handles of the 3D object models.
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
Poses.
. ObjectModel3DRigidTrans (output_control) . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handles of the transformed 3D object models.
Result
rigid_trans_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an
exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d, fit_primitives_object_model_3d
Possible Successors
project_object_model_3d, object_model_3d_to_xyz, get_object_model_3d_params
See also
affine_trans_point_3d, affine_trans_object_model_3d
Module
3D Metrology

sample_object_model_3d ( : : ObjectModel3D, Method, SamplingParam,
GenParamName, GenParamValue : SampledObjectModel3D )

Sample a 3D object model.


sample_object_model_3d creates a sampled version of the 3D object model ObjectModel3D and returns
it in SampledObjectModel3D. Depending on the method used, SamplingParam controls the minimum
distance or the number of points in SampledObjectModel3D. The created 3D object model is returned in
SampledObjectModel3D.
Using sample_object_model_3d is recommended if complex point clouds are to be thinned out for
faster postprocessing or if primitives are to be converted to point clouds. Note that if the 3D object
model is triangulated and should be simplified while preserving its original geometry as well as possible,
simplify_object_model_3d should be used instead.
If the input object model ObjectModel3D contains only points, several sampling methods are available which
can be selected using the parameter Method:

’fast’: The default method ’fast’ adds all points from the input model which are not closer than SamplingParam
to any point that was earlier added to the output model. If present, normals, XYZ-mapping and extended point
attributes are copied to the output model.
’fast_compute_normals’: The method ’fast_compute_normals’ selects the same points as the method ’fast’, but
additionally calculates the normals for all points that were selected. For this, the input object model must
either contain normals, which are copied, or it must contain a XYZ-mapping attribute from which the normals
are computed. The z-component of the calculated normal vectors is always positive. The XYZ-mapping is
created by xyz_to_object_model_3d.
’accurate’: The method ’accurate’ goes through the points of the 3D object model ObjectModel3D and cal-
culates whether any other points are within a sphere with the radius SamplingParam around the ex-
amined point. If there are no other points, the original point is stored in SampledObjectModel3D.
If there are other points, the center of gravity of these points (including the original point) is stored in
SampledObjectModel3D. This procedure is repeated with the remaining points until there are no points
left. Extended attributes of the input 3D object model are not copied, but normals and XYZ-mapping
are copied. For this method, a noise removal is possible by specifying a value for ’min_num_points’ in
GenParamName and GenParamValue, which removes all interpolated points that had less than the spec-
ified number of neighbor points in the original model.
’accurate_use_normals’: The method ’accurate_use_normals’ requires normals in the input 3D object model and
interpolates only points with similar normals. The similarity depends on the angle between the normals. The
threshold of the angle can be specified in GenParamName and GenParamValue with ’max_angle_diff’.
The default value is 180 degrees. Additionally, outliers can be removed as described in the method ’accurate’,
by setting the generic parameter ’min_num_points’.
’xyz_mapping’: The method ’xyz_mapping’ can only be applied to 3D object models that contain an XYZ-
mapping (for example, if it was created using xyz_to_object_model_3d). This mapping stores for
each 3D point its original image coordinates. The method ’xyz_mapping’ subdivides those original images
into squares with side length SamplingParam (which is given in pixel) and selects one 3D point per square.
The method behaves similarly to applying zoom_image_factor to the original XYZ-images. Note that
this method does not use the 3D-coordinates of the points for the point selection, only their 2D image coor-
dinates.
It is important to notice that for this method, the parameter SamplingParam corresponds to a distance in
pixels, not to a distance in 3D space.
’xyz_mapping_compute_normals’: The method ’xyz_mapping_compute_normals’ selects the same points as the
method ’xyz_mapping’, but additionally calculates the normals for all points that were selected. The z-
component of the normal vectors is always positive. If the input object model contains normals, those normals
are copied to the output. Otherwise, the normals are computed based on the XYZ-mapping.
’furthest_point’: The method ’furthest_point’ iteratively adds the point of the input object to the output object that
is furthest from all points already added to the output model. This usually leads to a reasonably uniform
sampling. For this method, the desired number of points in the output model is passed in SamplingParam.
If that number exceeds the number of points in the input object, then all points of the input object are returned.
The first point added to the output object is the point that is furthest away from the center of the axis aligned
bounding box around the points of the input object.
’furthest_point_compute_normals’: The method ’furthest_point_compute_normals’ selects the same points as the
method ’furthest_point’, but additionally calculates the normals for all points that were selected. The number
of desired points in the output object is passed in SamplingParam.
To compute the normals, the input object model must either contain normals, which are copied, or it must
contain a XYZ-mapping attribute from which the normals are computed. The z-component of the calculated
normal vectors is always positive. The XYZ-mapping is created by xyz_to_object_model_3d.


If the input object model contains faces (triangles or polygons) or is a 3D primitive, the surface is sampled with the
given distance. In this case, the method specified in Method is ignored. The directions of the computed normals
depend on the face orientation of the model. Usually, the orientation of the faces does not vary within one CAD
model, which results in a set of normals that is either pointing inwards or outwards. Note that planes and cylinders
must have finite extent. If the input object model contains lines, the lines are sampled with the given distance
SamplingParam.
The sampling process approximates surfaces by creating new points in the output object model. Therefore, any
extended attributes from the input object model are discarded.
For mixed input object models, the sampling priority is (from top to bottom) faces, lines, primitives and points,
i.e., only the objects of the highest priority are sampled.
The parameter SamplingParam accepts either one value, which is then used for all 3D object models passed in
ObjectModel3D, or one value per input object model. If SamplingParam is a distance in 3D space the unit
is the usual HALCON-internal unit ’m’.
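As a further illustration, the following minimal sketch (the input file name is hypothetical) reduces a point cloud to a fixed number of approximately uniformly distributed points, including normals:

* Hypothetical input file; for the '_compute_normals' variant the model must
* contain normals or an XYZ-mapping.
read_object_model_3d ('dense_scan.om3', 'm', [], [], ObjectModel3D, Status)
sample_object_model_3d (ObjectModel3D, 'furthest_point_compute_normals', \
                        5000, [], [], SampledObjectModel3D)
get_object_model_3d_params (SampledObjectModel3D, 'num_points', NumPoints)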
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model to be sampled.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Selects between the different subsampling methods.
Default: ’fast’
List of values: Method ∈ {’fast’, ’fast_compute_normals’, ’accurate’, ’accurate_use_normals’,
’xyz_mapping’, ’xyz_mapping_compute_normals’, ’furthest_point’, ’furthest_point_compute_normals’}
. SamplingParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer
Sampling distance or number of points.
Number of elements: SamplingParam == 1 || SamplingParam == ObjectModel3D
Default: 0.05
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
Names of the generic parameters that can be adjusted.
Default: []
List of values: GenParamName ∈ {’min_num_points’, ’max_angle_diff’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer / string
Values of the generic parameters that can be adjusted.
Default: []
Suggested values: GenParamValue ∈ {1, 2, 5, 10, 20, 0.1, 0.25, 0.5}
. SampledObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handle of the 3D object model that contains the sampled points.
Number of elements: SampledObjectModel3D == ObjectModel3D
Example

gen_box_object_model_3d ([0,0,0,0,0,0,0],3,2,1, ObjectModel3D)


sample_object_model_3d (ObjectModel3D, 'fast', 0.05, [], [], \
SampledObjectModel3D)
dev_get_window (WindowHandle)
visualize_object_model_3d (WindowHandle, SampledObjectModel3D, \
[], [], [], [], [], [], [], PoseOut)

Result
sample_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception
is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.


Possible Predecessors
read_object_model_3d, gen_plane_object_model_3d, gen_sphere_object_model_3d,
gen_cylinder_object_model_3d, gen_box_object_model_3d,
gen_sphere_object_model_3d_center, xyz_to_object_model_3d
Possible Successors
get_object_model_3d_params, clear_object_model_3d
Alternatives
simplify_object_model_3d, smooth_object_model_3d
Module
3D Metrology

simplify_object_model_3d ( : : ObjectModel3D, Method, Amount,
GenParamName, GenParamValue : SimplifiedObjectModel3D )

Simplify a triangulated 3D object model.


simplify_object_model_3d simplifies the triangulated 3D object model ObjectModel3D by re-
moving model points and returns the result in SimplifiedObjectModel3D. Note that in contrast to
sample_object_model_3d, points are removed such that the original geometry of the object model is
represented as well as possible. Typically, this means that edges are preserved while the point density within smooth
parts is reduced. This might be helpful, for example, to speed up subsequent operator calls by using a 3D object
model of reduced complexity.
The triangulation of the input 3D object model is preserved as opposed to the operator
sample_object_model_3d, which samples surfaces to equidistant unconnected 3D points.
Currently, the operator offers only a single simplification method (’preserve_point_coordinates’), which can be set
in Method. This method ensures that the points in the simplified object model SimplifiedObjectModel3D
have the same coordinates as the respective points in the input object model ObjectModel3D.
simplify_object_model_3d only works for triangulated object models. Whether an object model contains
a triangulation can be queried with get_object_model_3d_params (GenParamName=’has_triangles’).
Object models that do not contain a triangulation must be triangulated beforehand, e.g., by using
triangulate_object_model_3d or prepare_object_model_3d (Purpose=’segmentation’).
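A minimal sketch of this check (the file name is hypothetical; the triangulation parameters are left at their defaults):

read_object_model_3d ('scan.om3', 'm', [], [], ObjectModel3D, Status)
get_object_model_3d_params (ObjectModel3D, 'has_triangles', HasTriangles)
PreparedModel := ObjectModel3D
if (HasTriangles == 'false')
    * Triangulate first, e.g., with the 'greedy' method and default parameters.
    triangulate_object_model_3d (ObjectModel3D, 'greedy', [], [], \
                                 PreparedModel, Information)
endif
simplify_object_model_3d (PreparedModel, 'preserve_point_coordinates', 10.0, \
                          [], [], SimplifiedObjectModel3D)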
The degree of simplification can be set with Amount. By default, Amount specifies the percentage of points of
the input object model that should be contained in the output object model. Thus, the smaller the value of Amount
in this case is chosen the stronger the object model will be simplified.
Alternatively, the meaning of the parameter Amount can be modified. For this, the generic parameter
’amount_type’ can be set to one of the following values:

’percentage_remaining’ (default): Amount specifies the percentage of points of the input object model that
should be contained in the output object model.
Value range: [0.0 ... 100.0].
’percentage_to_remove’: Amount specifies the percentage of points of the input object model that should be
removed.
Value range: [0.0 ... 100.0].
’num_points_remaining’: Amount specifies the number of points of the input object model that should be con-
tained in the output object model.
Value range: [0 ... number of points in the input object model].
’num_points_to_remove’: Amount specifies the number of points of the input object model that should be re-
moved.
Value range: [0 ... number of points in the input object model].

Sometimes triangular meshes flip during the simplification, i.e., the direction of their normal vectors changes by
180 degrees. This especially happens for artificially created CAD models that consist of planar parts. To avoid this
flipping, the generic parameter ’avoid_triangle_flips’ can be set to ’true’ (the default is ’false’). Note that in this
case, the run-time of simplify_object_model_3d will increase.


Note that multiple calls of simplify_object_model_3d with a lower degree of simplification might re-
sult in a different simplified object model compared to a single call with a higher degree of simplification.
Also note that isolated (i.e., non-triangulated) points will be removed. This might result in a number of points
in SimplifiedObjectModel3D that slightly deviates from the degree of simplification that is specified in
Amount.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model that should be simplified.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Method that should be used for simplification.
Default: ’preserve_point_coordinates’
List of values: Method ∈ {’preserve_point_coordinates’}
. Amount (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Degree of simplification (default: percentage of remaining model points).
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’amount_type’, ’avoid_triangle_flips’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value-array ; string / real
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {’percentage_remaining’, ’percentage_to_remove’,
’num_points_remaining’, ’num_points_to_remove’, ’true’, ’false’}
. SimplifiedObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handle of the simplified 3D object model.
Example

read_object_model_3d ('mvtec_bunny.om3', 'm', [], [], ObjectModel3D, Status)
dev_get_window (WindowHandle)
visualize_object_model_3d (WindowHandle, ObjectModel3D, [], [], [], [], \
[], [], [], Pose)
simplify_object_model_3d (ObjectModel3D, 'preserve_point_coordinates', \
5.0, 'amount_type', 'percentage_remaining', \
SimplifiedObjectModel3D)
visualize_object_model_3d (WindowHandle, SimplifiedObjectModel3D, [], \
Pose, [], [], [], [], [], Pose)

Result
simplify_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an ex-
ception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
prepare_object_model_3d, read_object_model_3d, triangulate_object_model_3d,
xyz_to_object_model_3d
Possible Successors
disp_object_model_3d, smallest_bounding_box_object_model_3d
Alternatives
sample_object_model_3d, smooth_object_model_3d
References
Michael Garland, Paul S. Heckbert: Surface Simplification Using Quadric Error Metrics, Proceedings of the 24th
Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’97), 209-216, ACM Press, 1997
Module
3D Metrology

smooth_object_model_3d ( : : ObjectModel3D, Method, GenParamName,
GenParamValue : SmoothObjectModel3D )

Smooth the 3D points of a 3D object model.


The operator smooth_object_model_3d smoothes the 3D points in ObjectModel3D and returns the
smoothed points in SmoothObjectModel3D. Currently, the operator offers three methods for smoothing that
can be selected in Method: ’mls’, ’xyz_mapping’ and ’xyz_mapping_compute_normals’. ’mls’ applies a Mov-
ing Least Squares (MLS) algorithm on the 3D points. As a side effect of the smoothing, the method extends
SmoothObjectModel3D by corresponding normals. ’xyz_mapping’ smoothes the coordinates of the 3D points
using a 2D filter and the 2D mapping contained in ObjectModel3D. ’xyz_mapping_compute_normals’ applies
the same smoothing as ’xyz_mapping’, but additionally extends SmoothObjectModel3D by normals.
Additional parameters can be set with GenParamName and GenParamValue. The parameter names settable
for Method=’mls’ use the prefix ’mls’. Analogously, parameter names settable for Method=’xyz_mapping’ and
Method=’xyz_mapping_compute_normals’ use the prefix ’xyz_mapping’.
MLS smoothing
By selecting Method=’mls’, for each point P , the MLS smoothing algorithm fits a planar surface or a higher
order polynomial surface to its k-neighborhood (the k nearest points). The surface fitting is essentially a standard
weighted least squares parameter estimation of the plane or polynomial surface parameters, respectively. The clos-
est neighbors of P have higher contribution than the other points, which is controlled by the following weighting
function with a parameter σ:

w(P′) = exp( −‖P′ − P‖² / σ² )

The point is then projected on the surface. This process is repeated for all points resulting in a smoothed point
set. The fitted surfaces have well defined normals (i.e., they can easily be computed from the surface parameters).
Therefore, the points are augmented by the corresponding normals as side effect of the smoothing.
Additional parameters can be adjusted for the MLS smoothing specifically using the following parameter names
and values for GenParamName and GenParamValue:

’mls_kNN’: Specify the number of nearest neighbors k that are used to fit the MLS surface to each point.
Suggested values: 40, 60, 80, 100, 400.
Default: 60.
’mls_order’: Specify the order of the MLS polynomial surface. For ’mls_order’=1 the surface is a plane.
Suggested values: 1, 2, 3.
Default: 2.
’mls_abs_sigma’: Specify the weighting parameter σ as a fixed absolute value in meter. The value to be selected
depends on the scale of the point data. As a rule of thumb, σ can be selected to be the typical distance
between a point P and its k/2-th neighbor P_(k/2). Note that setting an absolute weighting parameter for point
data with varying density might result in different smoothing results for points that are situated in parts of the
point data with different densities. This problem can be avoided by using ’mls_relative_sigma’ instead that is
scale independent, which makes it also a more convenient way to specify the neighborhood weighting. Note
that if ’mls_abs_sigma’ is passed, any value set in ’mls_relative_sigma’ is ignored.
Suggested values: 0.0001, 0.001, 0.01, 0.1, 1.0.
’mls_relative_sigma’: Specify a multiplication factor σ_rel that is used to compute σ_P for a point P by the formula:

σ_P = σ_rel · ‖P_(k/2) − P‖,

where P_(k/2) is the k/2-th neighbor of P. Note that, unlike σ, which is a global parameter for all points, σ_P
is computed for each point P and therefore adapts the weighting function to its neighborhood. This avoids
problems that might appear while trying to set a global parameter σ (’mls_abs_sigma’) for point data with
highly varying point density. Note however that if ’mls_abs_sigma’ is set, ’mls_relative_sigma’ is ignored.
Suggested values: 0.1, 0.5, 1.0, 1.5, 2.0.
Default: 1.0.
’mls_force_inwards’: If this parameter is set to ’true’, all surface normals are oriented such that they point “in
the direction of the origin”. Expressed mathematically, it is ensured that the scalar product between the
normal vector and the vector from the respective surface point to the origin is positive. This may be nec-
essary if the resulting SmoothObjectModel3D is used for surface-based matching, either as model in
create_surface_model or as 3D scene in find_surface_model, because here, the consistent orientation
of the normals is important for the matching process. If ’mls_force_inwards’ is set to ’false’, the normal
vectors are oriented arbitrarily.
List of values: ’true’, ’false’.
Default: ’true’.
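The following minimal sketch (hypothetical file name, assumed parameter values) applies the MLS smoothing with a scale-independent neighborhood weighting:

read_object_model_3d ('noisy_scan.om3', 'm', [], [], ObjectModel3D, Status)
* Second-order MLS surfaces, 60 nearest neighbors, relative sigma of 1.0.
smooth_object_model_3d (ObjectModel3D, 'mls', \
                        ['mls_kNN','mls_order','mls_relative_sigma'], \
                        [60, 2, 1.0], SmoothObjectModel3D)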

2D mapping smoothing
By selecting Method=’xyz_mapping’ or Method=’xyz_mapping_compute_normals’, the coordinates of the 3D
points are smoothed using a 2D filter and the 2D mapping contained in ObjectModel3D. Additionally, for
Method=’xyz_mapping_compute_normals’, SmoothObjectModel3D is extended by normals computed from
the XYZ-mapping. If no 2D mapping is available, an exception is raised. As the filter operates on the 2D depth
image, using Method=’xyz_mapping’ or Method=’xyz_mapping_compute_normals’ is usually faster than using
Method=’mls’. Invalid points (e.g., duplicated points with coordinates [0,0,0]) should be removed from the 3D
object model before applying the operator, e.g., by using select_points_object_model_3d with attribute
’point_coord_z’ or ’num_neighbors_fast X’.
Additional parameters can be adjusted for the 2D mapping smoothing specifically using the following parameter
names and values for GenParamName and GenParamValue:

’xyz_mapping_filter’: Specify the filter used for smoothing the 2D mapping. The sizes of the corresponding filter
mask are set with ’xyz_mapping_mask_width’ and ’xyz_mapping_mask_height’.
In the default filter mode ’median_separate’, the filter method used on the 2D image is comparable to
median_separate. This mode is usually faster than ’median’, but can also lead to less accurate results
or artifacts at surface discontinuities.
Using filter mode ’median’, the used filter method is comparable to median_image.
List of values: ’median_separate’, ’median’.
Default: ’median_separate’.
’xyz_mapping_mask_width’, ’xyz_mapping_mask_height’: Specify the width and height of the used filter mask.
For ’xyz_mapping_filter’=’median_separate’ or ’xyz_mapping_filter’=’median’, even values for
’xyz_mapping_mask_width’ or ’xyz_mapping_mask_height’ are increased to the next odd value auto-
matically.
For ’xyz_mapping_filter’=’median’, the used filter mask must be quadratic (’xyz_mapping_mask_width’
= ’xyz_mapping_mask_height’). Thus, when setting only ’xyz_mapping_mask_width’ or
’xyz_mapping_mask_height’, the other parameter is set to the same value automatically. If two differ-
ent values are set, an error is raised.
Suggested values: 3, 5, 7, 9.
Default: 3.
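A minimal sketch of the 2D mapping smoothing (hypothetical image files; the z-range and mask size are assumptions):

read_image (X, 'scan_x')
read_image (Y, 'scan_y')
read_image (Z, 'scan_z')
xyz_to_object_model_3d (X, Y, Z, ObjectModel3D)
* Remove invalid points (here assumed to have z close to 0) before smoothing.
select_points_object_model_3d (ObjectModel3D, 'point_coord_z', 0.001, 10.0, \
                               ObjectModel3DValid)
* For the 'median' filter the mask must be quadratic; setting only the width
* sets the height to the same value automatically.
smooth_object_model_3d (ObjectModel3DValid, 'xyz_mapping_compute_normals', \
                        ['xyz_mapping_filter','xyz_mapping_mask_width'], \
                        ['median', 5], SmoothObjectModel3D)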

Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model containing 3D point data.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Smoothing method.
Default: ’mls’
List of values: Method ∈ {’mls’, ’xyz_mapping’, ’xyz_mapping_compute_normals’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of generic smoothing parameters.
Default: []
List of values: GenParamName ∈ {’mls_kNN’, ’mls_order’, ’mls_abs_sigma’, ’mls_relative_sigma’,
’mls_force_inwards’, ’xyz_mapping_filter’, ’xyz_mapping_mask_width’, ’xyz_mapping_mask_height’}


. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; real / integer / string


Values of generic smoothing parameters.
Default: []
Suggested values: GenParamValue ∈ {10, 20, 40, 60, 0.1, 0.5, 1.0, 2.0, 0, 1, 2, 3, 5, 7, 9}
. SmoothObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handle of the 3D object model with the smoothed 3D point data.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
Alternatives
surface_normals_object_model_3d, sample_object_model_3d,
simplify_object_model_3d
Module
3D Metrology

surface_normals_object_model_3d ( : : ObjectModel3D, Method,
GenParamName, GenParamValue : ObjectModel3DNormals )

Calculate the 3D surface normals of a 3D object model.


The operator surface_normals_object_model_3d calculates the 3D surface normals for the object
ObjectModel3D using the method specified by Method. The calculated normals are appended to the input
object and the resulting object is returned in ObjectModel3DNormals.
For Method ’mls’, the normals estimation method Moving Least Squares (MLS) is applied. The MLS method
for normals estimation is essentially identical with the MLS method used by smooth_object_model_3d
with the exception that in surface_normals_object_model_3d the 3D points are not smoothed, i.e., the
original 3D points of ObjectModel3D remain unchanged. For more details on the MLS as well as a full list and
descriptions of the supported MLS parameters refer to smooth_object_model_3d.
If the object ObjectModel3D contains triangles, the Method ’triangles’ can be used to obtain point normals
from the normals of the triangles neighboring a point. The normals of the neighboring triangles are weighted
according to the angle which the triangle encloses at the point. The triangle normals are returned in the extended
attributes ’&triangle_normal_x’, ’&triangle_normal_y’ and ’&triangle_normal_z’. If the extended attributes al-
ready exist, they will not be overwritten.
If the object ObjectModel3D contains a 2D mapping (for example a 3D object model that was created with
xyz_to_object_model_3d), the Method ’xyz_mapping’ can be used to obtain point normals from the neigh-
borhood of the points in the 2D mapping. In an 11x11 neighborhood of the points in the 2D mapping, a plane is
fit through the corresponding 3D points. The normal of this plane then gets switched in a direction consistent with
the 2D mapping, for example along the viewing direction of the sensor or in the opposite direction.
Note that for points where the normal vector cannot be estimated, it is set to the zero vector. This happens, for
example, if the 3D object model contains an identical point more than ’mls_kNN’ times.
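A minimal sketch (hypothetical file name and parameter values) that estimates normals with the MLS method and uses the result for surface-based matching:

read_object_model_3d ('part_scan.om3', 'm', [], [], ObjectModel3D, Status)
surface_normals_object_model_3d (ObjectModel3D, 'mls', \
                                 ['mls_kNN','mls_force_inwards'], \
                                 [60, 'true'], ObjectModel3DNormals)
create_surface_model (ObjectModel3DNormals, 0.03, [], [], SurfaceModelID)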
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model containing 3D point data.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Normals calculation method.
Default: ’mls’
List of values: Method ∈ {’mls’, ’triangles’, ’xyz_mapping’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of generic smoothing parameters.
Default: []
List of values: GenParamName ∈ {’mls_kNN’, ’mls_order’, ’mls_abs_sigma’, ’mls_relative_sigma’,
’mls_force_inwards’}


. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; real / integer / string


Values of generic smoothing parameters.
Default: []
Suggested values: GenParamValue ∈ {10, 20, 40, 60, 0.1, 0.5, 1.0, 2.0, 0, 1, 2, ’true’, ’false’}
. ObjectModel3DNormals (output_control) . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handle of the 3D object model with calculated 3D normals.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.

Possible Predecessors
sample_object_model_3d
Possible Successors
create_surface_model, fuse_object_model_3d
Alternatives
smooth_object_model_3d
Module
3D Metrology

triangulate_object_model_3d ( : : ObjectModel3D, Method,
GenParamName, GenParamValue : TriangulatedObjectModel3D, Information )

Create a surface triangulation for a 3D object model.


The operator triangulate_object_model_3d generates a surface of triangular faces for the 3D object
model ObjectModel3D and returns the resulting surface in TriangulatedObjectModel3D. Currently,
the operator offers four methods for the triangulation that can be selected in Method: ’polygon_triangulation’,
’xyz_mapping’, ’greedy’ and ’implicit’. ’polygon_triangulation’ is a simple method for the conversion of a polygo-
nal to a triangular face representation in a 3D object model. ’xyz_mapping’ triangulates the points in 2D according
to a 2D mapping. The other two methods are rather complex algorithms that are used to calculate triangular faces
from pure 3D point data with unknown surface topology. A detailed comparison of the ’greedy’ and ’implicit’
algorithms is provided in the paragraph "Comparison of the triangulation methods" below.
Polygon triangulation
By selecting Method=’polygon_triangulation’, all polygons in ObjectModel3D are triangulated. No generic
parameters are supported for this method. If no polygons are available, an exception is raised. A triangular mesh
representing the same surface as ObjectModel3D is returned in TriangulatedObjectModel3D.
2D mapping triangulation
By selecting Method=’xyz_mapping’, the points are triangulated in 2D according to a 2D mapping con-
tained in ObjectModel3D. The used method is the same as in prepare_object_model_3d for Pur-
pose=’segmentation’. If no 2D mapping is available, an exception is raised.
As a post-processing step, triangles whose normal differs strongly from a specified direction can be removed;
refer to the description of GenParamName below. This is helpful in cases where the 2D neighborhood used for
the triangulation does not reflect the 3D neighborhood well, e.g., when parts of the surface are hidden along the
viewing direction of the sensor, or to remove typical noise along the viewing direction of the sensor.
By setting GenParamName to the following value, the additional parameter specific for the 2D mapping triangu-
lation can be set with GenParamValue:

’xyz_mapping_max_area_holes’ specifies which area holes of the point coordinates are closed during a simple
Delaunay triangulation. Only holes which are completely surrounded by the image region are closed. If
’xyz_mapping_max_area_holes’ is set to 0, no holes are triangulated. The parameter corresponds to the
GenParamName ’max_area_holes’ of prepare_object_model_3d.

Suggested values: 1, 10, 100.
Default: 10.
’xyz_mapping_max_view_angle’ specifies the maximum allowed angle difference between the triangle nor-
mal and the viewing direction of the sensor. The smaller this value is set, the fewer trian-
gles are returned. The viewing direction of the sensor is assumed to be the z-axis in the co-
ordinate system of ObjectModel3D if not specified differently using GenParamName set to
’xyz_mapping_max_view_dir_x’, ’xyz_mapping_max_view_dir_y’, and ’xyz_mapping_max_view_dir_z’.
The angle has to be specified between 0 and 90 degrees.
Suggested values: ’rad(60)’, ’rad(85)’, ’rad(90)’.
Default: ’rad(90)’.
’xyz_mapping_max_view_dir_x’, ’xyz_mapping_max_view_dir_y’, ’xyz_mapping_max_view_dir_z’ specify the
viewing direction of the sensor for use with the GenParamName ’xyz_mapping_max_view_angle’, in the
coordinate system of ObjectModel3D. If not all three coordinate directions are set simultaneously or the
direction equals the zero vector, an exception is raised.
Suggested values: [1, 0, 0], [0, 0, 1].
Default: [0, 0, 1].
’xyz_mapping_output_all_points’ controls whether all input points are returned, regardless of whether they were
used in the output triangulation. This parameter is mainly provided for reasons of backward compatibility. When
’xyz_mapping_output_all_points’ is set to ’false’, the old point indices are stored as an extended attribute
named ’original_point_indices’ in the 3D object model TriangulatedObjectModel3D. This attribute
can subsequently be queried with get_object_model_3d_params or be processed with other opera-
tors that use extended attributes.
List of values: ’false’, ’true’.
Default: ’false’.

Figure: (1) In order to triangulate the 3D object model, a 2D mapping of the model is used. The triangulation is based
on the respective 2D neighborhood. Thereby it is possible that unwanted triangles are created along the
sensor’s direction of view, e.g., because of hidden object parts or clutter data. (2) Whether or not a triangle is
returned, is decided by computing the difference between the normal direction of each triangle and the
viewing direction. The maximum deviation is specified in ’xyz_mapping_max_view_angle’.
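A minimal sketch of the 2D mapping triangulation (hypothetical image files; the angle threshold is an assumption):

read_image (X, 'scene_x')
read_image (Y, 'scene_y')
read_image (Z, 'scene_z')
xyz_to_object_model_3d (X, Y, Z, ObjectModel3D)
* Close small holes and discard triangles that are tilted by more than
* 85 degrees against the viewing direction (here the z-axis of the model).
triangulate_object_model_3d (ObjectModel3D, 'xyz_mapping', \
                             ['xyz_mapping_max_area_holes', \
                              'xyz_mapping_max_view_angle'], \
                             [10, rad(85)], \
                             TriangulatedObjectModel3D, Information)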

Greedy triangulation
By selecting Method=’greedy’, a so called greedy triangulation algorithm is invoked. It requires 3D point data
containing normals. If ObjectModel3D does not contain the normals, they are calculated internally, in an
identical manner to calling surface_normals_object_model_3d with its default parameters before trian-
gulation. The algorithm constructs a surface, which passes through the points and whose surface normals must
be conform to the corresponding point normals up to a given tolerance. The surface is represented by triangular
faces, which are constructed from triplets of neighboring points. In order to determine which triplets qualify for a
surface triangle, the algorithm applies for each point pair the following local neighborhood test, denoted as surface
neighborhood criteria (SNC):
If a point P is lying on a surface, with N being the orientation (normal) of the surface, then a point P′ with normal
N′ is considered to lie on this surface if:

1. the distance between both points is smaller than or equal to r, i.e., ∆(P, P′) ≤ r
2. both normals have a similar orientation, i.e., the angle ∠(N, N′) ≤ α or, if no strict consistency of the normals
is enforced, ∠(N, −N′) ≤ α
3. the vector δP = P′ − P is close to orthogonal with respect to N, i.e., the angle |90° − ∠(N, δP)| ≤ β
4. if P′ does not meet criterion 3 but is not further away from the plane defined by [P, N] than d, then it is
accepted as well.

The four parameters r (see ’greedy_radius_type’ and ’greedy_radius_value’), α (see ’greedy_neigh_orient_tol’),
β (see ’greedy_neigh_latitude_tol’), and d (see ’greedy_neigh_vertical_tol’) control the criteria and have the
following meaning:
The parameter α essentially controls the curvature of the generated surface: for small values of α the generated
surface will be locally flatter; larger values of α permit the generation of more curved surface fragments.
The other three parameters define a portion of a sphere that constitutes the valid SNC neighborhood. The sphere has a
radius r, it is centered in P, and its equatorial plane is incident with the plane [P, N]. Only points that are within
the sphere (first SNC criterion) are considered. Furthermore, they need to have a latitude within [-β; β] (third SNC
criterion) with respect to the equator unless they are lying within the thin layer defined on both sides of the
equatorial plane by the distance parameter d (fourth SNC criterion). In contrast, points lying in any of the two pole
segments of the sphere (i.e., with a higher latitude than β and a distance from the equatorial plane beyond d) are not
considered as neighbors.
The parameter r prevents the algorithm from constructing too big triangles. This is particularly important for point
sets that represent several disconnected surface pieces or a surface with holes that must not be closed. The latitude
window defined by β enables neighbors which deviate from [P, N ] due to noise or curvature to be considered as
well. Similarly, the parameter d enables neighbors right "above" or "below" the equatorial plane to be accepted,
which essentially accounts for data noise.
Here is some advice for selecting the appropriate values for these parameters:

• If the resulting surface triangulation looks very disconnected or exhibits many holes, this might be a hint that
r is too small and thus restricts the generation of triangles that are large enough to close the holes. Try to
increase r.
• If the normals data is noisy (i.e., neighboring normals deviate to a large extent from each other), then
increase α. Noisy normals are typically caused either by the sensor, which delivers both the
point and the normals data, or by an imprecise normals estimation routine, which computes the normals from
the point data.
• If the point data represents a very curved surface, i.e., it exhibits a very fine structure like, e.g., little buckles,
fine waves or folds, or sharp turns, then make sure the generation of curved data is facilitated by an increasing
α and/or β.
• In contrast, if the data is rather planar but has lots of outliers (i.e., points laying next to the surface, which
have completely different orientations and thus most probably do not belong to it), then decrease α to exclude
them from the surface generation.
• If the point data is very noisy and resembles more a crust than a single-layer surface, then increase β and/or
d to make sure that neighbors for P can still be found even if they are further away from the optimal plane
[P, N ].


• In contrast, if the data is rather noise-free, but two surfaces are running close to each other and are nearly
parallel, e.g., surfaces representing the front and the back side of a thin, plate-like object, then decrease β and
d to avoid interference between the surfaces.

The greedy triangulation algorithm starts by initializing a surface with one triangle constructed from three SNC-
eligible, neighboring points. If all valid neighborhoods show local inconsistencies like collinear or ’double’ points,
an error will be raised. A prior call of sample_object_model_3d with Method set to ’fast’ and a small
SamplingParam will remove most local inconsistencies from ObjectModel3D. Having found one triangle, the
algorithm then greedily constructs new triangles as long as further points can be reached by the SNC rules from
any point on the surface boundaries. If no points can be reached from the current surface, but there are unprocessed
points in the 3D object model, a new surface is initialized. Because the SNC rules are essentially defined only in
the small local neighborhoods of the points, the resulting surface can have global topological artifacts like holes
and flips. The latter occur when a growing surface meets itself with inverted face orientations (i.e.,
the surface was flipped somewhere while it was growing). These artifacts are handled in special post-processing
steps: hole filling and flip resolving, respectively.
Finally, a mesh morphology can be performed to additionally remove artifacts that occurred on the final surface
boundaries. The mesh morphology consists of several mesh erosion cycles and several subsequent mesh dilation
cycles. With each erosion cycle, all triangles reachable from the surface boundaries are removed and the surface
boundaries shrink. Then, with each dilation cycle all triangles reachable from the surface boundaries are appended
again to the surface and the boundaries expand. Note that this is only possible for triangles, which were removed by
an erosion cycle before that. Therefore, once the original boundaries of the surface (i.e., those which existed before
the mesh erosion cycles) are reached, the dilation cannot advance any further and hence the dilation cycles cannot
be more than the erosion cycles. Applying mesh erosion and dilation subsequently is analogous to performing
opening to standard HALCON regions. At last, the mesh morphology can delete surface pieces which have too
few triangles.
The individual algorithm steps are summarized here:

1. Triangulation of all points reachable by SNC


2. Hole filling (see ’greedy_hole_filling’)
3. Flip resolving (see ’greedy_fix_flips’)
4. Mesh morphology (see ’greedy_mesh_erosion’, ’greedy_mesh_dilation’, and
’greedy_remove_small_surfaces’)

By setting GenParamName to one of the following values, additional parameters specific for the greedy triangu-
lation can be set with GenParamValue:

’greedy_kNN’ specifies the size k of the neighborhood. While looking for reachable SNC neighbors for a surface
boundary point, the algorithm considers only its closest k neighbors.
Suggested values: 20, 30, 40, 50, 60.
Default: 40.
’greedy_radius_type’: if set to ’fixed’, ’greedy_radius_value’ specifies the SNC radius r in meter units.
If set to ’z_factor’, r is calculated for each point P by multiplying its z-coordinate by the value specified
by ’greedy_radius_value’. This representation of r is appropriate for data where the density of the points
correlates with their distance from the sensor they were recorded with. This is typically the case with depth
sensors or TOF cameras.
If set to ’auto’, the algorithm determines internally whether to use a ’fixed’ or a ’z_factor’ radius and estimates
its value. The estimated value is then multiplied by the value specified in ’greedy_radius_value’. This way,
the user specifies a scale factor for the estimated radius.
List of values: ’auto’, ’fixed’, ’z_factor’.
Default: ’auto’.
’greedy_radius_value’: see ’greedy_radius_type’.
Suggested values: 0.01, 0.05, 0.5, 0.66, 1.0, 1.5, 2.0, 3.0, 4.0
’greedy_neigh_orient_tol’: sets the SNC parameter α in degree units. α controls the surface curvature as described
with the SNC rules above.
Suggested values: 10, 20, 30, 40.
Default: 30.


’greedy_neigh_orient_consistent’: enforces that the normals of two neighboring points have the same orientation
(i.e., they do not point in opposite directions). If enabled, this parameter disables the second part of the SNC
criterion for α, i.e., if ∠(N, N′) > α, the criterion fails even if ∠(N, −N′) ≤ α.
List of values: ’true’, ’false’.
Default: ’false’.
’greedy_neigh_latitude_tol’: sets the SNC parameter β in degree units. β controls the surface neighborhood
latitude window as described with the SNC rules above.
Suggested values: 10, 20, 30, 40.
Default: 30.
’greedy_neigh_vertical_tol’: sets the SNC parameter d as a factor of the radius r.
Suggested values: 0.01, 0.1, 0.2, 0.3.
Default: 0.1.
’greedy_hole_filling’: sets the length of surface boundaries (in number of point vertices) that should be considered
for the hole filling. If ’false’ is specified, then the hole filling step is disabled.
Suggested values: ’false’, 20, 40, 60.
Default: 40.
’greedy_fix_flips’: enables/disables the flip resolving step of the algorithm.
List of values: ’true’, ’false’.
Default: ’true’.
’greedy_prefetch_neighbors’: enables/disables prefetching of lists of the k nearest neighbors for all points. This
prefetching improves the algorithm speed, but has high memory requirements (O(kn), where k is the number
specified by ’greedy_kNN’, and n is the number of points in ObjectModel3D). For very large data, it might
be impossible to preallocate such a large amount of memory, which results in a memory error message. In such a case
the prefetching must be disabled.
List of values: ’true’, ’false’.
Default: ’true’.
’greedy_mesh_erosion’: specifies the number of erosion cycles applied to the final mesh.
Suggested values: 0, 1, 2, 3.
Default: 0.
’greedy_mesh_dilation’: specifies the number of dilation cycles. The mesh dilation is applied after the mesh
erosion. If ’greedy_mesh_dilation’ is set to a greater value than ’greedy_mesh_erosion’, it will be reduced
internally to the value of ’greedy_mesh_erosion’.
Suggested values: 0, 1, 2, 3.
Default: 0.
’greedy_remove_small_surfaces’: controls the criteria for removing small surface pieces. If set to ’false’, the
small surface removal is disabled. If set to a value between 0.0 and 1.0, all surfaces having less triangles
than ’greedy_remove_small_surfaces’×num_triangles will be removed, where num_triangles is
the total number of triangles generated by the algorithm. If set to a value greater than 1, all surfaces having
less triangles than ’greedy_remove_small_surfaces’ will be removed.
Suggested values: ’false’, 0.01, 0.05, 0.1, 10, 100, 1000, 10000.
Default: ’false’.
’greedy_timeout’: using a timeout, it is possible to interrupt the operator after a defined period of time in seconds.
This is especially useful in cases where a maximum cycle time has to be ensured. The temporal accuracy of
this interrupt is about 10 ms. Passing values less than zero is not valid. Setting ’greedy_timeout’ to ’false’
deactivates the timeout, which corresponds to the default. The temporal accuracy depends on several factors
including the size of the model, the speed of your computer, and the ’timer_mode’ set via set_system.
Suggested values: ’false’, 0.1, 0.5, 1, 10, 100.
Default: ’false’.
’greedy_suppress_timeout_error’: by default, if a timeout occurs the operator returns a timeout error code. By
setting ’greedy_suppress_timeout_error’ to ’true’ instead, the operator returns no error and the intermediate
results of the triangulation are returned in TriangulatedObjectModel3D. With the error suppressed,
the occurrence of a timeout can be checked by querying the list of values returned in Information (in
’verbose’ mode) by looking for the value corresponding to ’timeout_occured’.
List of values: ’false’, ’true’.
Default: ’false’.


’greedy_output_all_points’: controls whether all input points are returned, regardless of whether they were used
in the output triangulation. This parameter is mainly provided for reasons of backward compatibility. When
’greedy_output_all_points’ is set to ’false’, the old point indices are stored as an extended attribute named
’original_point_indices’ in the 3D object model TriangulatedObjectModel3D. This attribute can sub-
sequently be queried with get_object_model_3d_params or be processed with other operators that
use extended attributes.
List of values: ’false’, ’true’.
Default: ’false’.
’information’: specifies which intermediate results shall be reported in Information. By default
(’information’=’num_triangles’), the number of generated triangles is reported. For ’information’=’verbose’, a list
of name-value information pairs is returned. Currently, the following information is reported:

’num_triangles’ (<number of triangles>): returns the number of generated triangular faces.
’specified_radius_type’ (’auto’ | ’fixed’ | ’z_factor’ | ’none’): returns the radius type as specified by the user.
’specified_radius_value’ (<specified radius value>): returns the radius value specified by the user.
’used_radius_type’ (’fixed’ | ’z_factor’ | ’sampling’): returns the radius type used internally; if the user specified
’auto’ for ’specified_radius_type’, this field returns the radius type that was selected internally; if
ObjectModel3D is a 3D primitive, the user-specified radius value is internally used as a sampling step and
’used_radius_type’ returns ’sampling’.
’used_radius_value’ (<used radius value>): returns the radius value used internally; if ’used_radius_type’=’fixed’,
the absolute neighborhood radius in meters is reported; if ’used_radius_type’=’z_factor’, the multiplication
factor is reported, which is used to compute the neighborhood radius from the z-coordinate of the
neighborhood center point; if ’used_radius_type’=’sampling’, the sub-sampling factor is reported, which is
used to generate the triangulation of 3D primitives, in particular cylinder and sphere.
’neigh_orient_tol’ (<α>): returns the surface curvature parameter α in degrees that was used for the triangulation.
’neigh_latitude_tol’ (<β>): returns the angular tolerance window in degrees that was used to select surface
neighbors.
’neigh_vertical_tol’ (<d>): returns the neighborhood parameter d as a factor of the used radius.
’fix_flips’ (’true’ | ’false’): returns whether the flip fixing was enabled.
’hole_filling’ (’false’ | <max hole boundary length>): returns ’false’ when the hole filling was disabled, or the
specified maximal hole boundary length in number of points.
’timeout’ (’false’ | <timeout>): returns ’false’ when the timeout was disabled, or the specified timeout in seconds.
’timeout_occured’ (’yes’ | ’no’): returns whether a timeout occurred.

List of values: ’num_triangles’, ’verbose’.


Default: ’num_triangles’.
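A minimal sketch of a greedy triangulation (hypothetical file name; the sampling distance and generic parameter values are assumptions):

read_object_model_3d ('part_scan.om3', 'm', [], [], ObjectModel3D, Status)
* Optional: remove local inconsistencies such as duplicate points beforehand.
sample_object_model_3d (ObjectModel3D, 'fast', 0.0005, [], [], SampledModel)
triangulate_object_model_3d (SampledModel, 'greedy', \
                             ['greedy_radius_type','greedy_radius_value', \
                              'greedy_remove_small_surfaces','information'], \
                             ['auto', 1.0, 100, 'verbose'], \
                             TriangulatedObjectModel3D, Information)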

Implicit triangulation
By selecting Method=’implicit’ an implicit triangulation algorithm based on a Poisson solver (see the paper in
References) is invoked. It constructs a water-tight surface, i.e., it is completely closed. The implicit triangulation
requires 3D point data containing normals. Additionally, it is required that the 3D normals are pointing strictly
inwards or strictly outwards regarding the volume enclosed by the surface to be reconstructed. Unlike the ’greedy’
algorithm, the ’implicit’ algorithm does not construct the surface through the input 3D points. Instead, it constructs
a surface that approximates the original 3D data and creates a new set of 3D points lying on this surface.
First, the algorithm organizes the point data in an adaptive octree structure: the volume of the bounding box
containing the point data is split in the middle in each dimension resulting in eight sub-volumes, or octree voxels.
Voxels still containing enough point data can be split into a further eight sub-voxels, whereas voxels that contain no
or only a few points are not split further. This splitting is repeated recursively in regions of dense 3D point data
until the resulting voxels contain no or only a few points. The recursion level of the voxel splits, reached with the
smallest voxels, is denoted as the depth of the octree.
In the next step, the algorithm estimates the values of the so-called implicit indicator function of the surface, based
on the assumption that the points from ObjectModel3D are lying on the surface of an object and the normals of
the points in ObjectModel3D are pointing inwards that object (see the paper in References). This assumption
explains the requirement of mutually consistent normal orientations. The implicit function has a value of 1 in
voxel corners that are strictly inside the body and 0 for voxel corners strictly outside of it. Due to noisy data, voxel
corners that are close to the boundary of the object cannot be ’labeled’ unambiguously. Therefore, they receive a
value between 0 and 1.
The implicit surface defined by the indicator function is a surface, such that each point lying on it has an indicator
value of 0.5. The implicit algorithm uses a standard marching cubes algorithm to compute the intersection points
of the implicit surface with the sides of the octree voxels. The intersection points result in the new set of 3D
points spanning the surface returned in TriangulatedObjectModel3D. As a consequence, the resolution of
the surface details reconstructed in TriangulatedObjectModel3D depends directly on the resolution of the
octree (i.e., on its depth).
By setting GenParamName to one of the following values, additional parameters specific for the implicit trian-
gulation can be set with GenParamValue:

’implicit_octree_depth’: sets the depth of the octree. The octree depth controls the resolution of the surface gen-
eration - a higher depth leads to a higher surface resolution. The octree depth has an exponential effect on the
runtime and an exponential effect on the memory requirements of the octree. Therefore, the depth is limited
to 12.
Restriction: 5 ≤ ’implicit_octree_depth’ ≤ 12.
Suggested values: 5, 6, 8, 10, 11, 12.
Default: 6.
’implicit_solver_depth’: enables an alternative algorithm, which can prepare the implicit function up to a user
specified octree depth, before the original algorithm takes over the rest of the computations. This algorithm
requires less memory than the original one, but is a bit slower.
Restriction: ’implicit_solver_depth’ ≤ ’implicit_octree_depth’.
Suggested values: 2, 4, 6, 8, 10, 11, 12.
Default: 6.
’implicit_min_num_samples’: sets the minimal number of point samples required per octree voxel node. If the
number of points in a voxel is less than this value, the voxel is not split any further. For noise free data, this
value can be set low (e.g., between 1-5). For noisy data, this value should be set higher (e.g., 10-20), such
that the noisy data is accumulated in single voxel nodes to smooth the noise.
Suggested values: 1, 5, 10, 15, 20, 30.
Default: 1.
’information’: specifies which intermediate results shall be reported in Information. By default
(’information’=’num_triangles’), the number of generated triangles is reported. For ’information’=’verbose’, a list
of name-value information pairs is returned. Currently, the following information is reported:
’num_triangles’ (<number of triangles>): returns the number of generated triangular faces.
’num_points’ (<number of points>): returns the number of generated points.
List of values: ’num_triangles’, ’verbose’.
Default: ’num_triangles’.
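A minimal sketch of an implicit triangulation (hypothetical file name; it is assumed that orienting the normals towards the origin makes them point into the object, which depends on the scan geometry):

read_object_model_3d ('part_scan.om3', 'm', [], [], ObjectModel3D, Status)
* Assumed: origin-facing normals point into the object for this scan setup.
surface_normals_object_model_3d (ObjectModel3D, 'mls', \
                                 ['mls_force_inwards'], ['true'], ModelNormals)
triangulate_object_model_3d (ModelNormals, 'implicit', \
                             ['implicit_octree_depth', \
                              'implicit_min_num_samples'], [8, 10], \
                             TriangulatedObjectModel3D, Information)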

Comparison of the triangulation methods

In this paragraph, a simple comparison of the ’greedy’ and the ’implicit’ triangulation methods is provided:

Required data:
greedy: 3D points with 3D normals
implicit: 3D points with 3D normals; the normals must point consistently inwards
Resulting surface:
greedy: open, a triangulation of the input points
implicit: closed (water-tight), an approximation of the input points
Resulting point data:
greedy: the input point data is preserved
implicit: new point data is generated
Noise handling:
greedy: moderate point and normal noise is handled properly
implicit: point and normal noise is handled implicitly; moderate and high noise levels are accepted
Triangulation resolution:
greedy: explicit, controlled by the surface neighborhood parameters
implicit: implicit, controlled by the octree depth and the minimal number of point samples per node
Time complexity:
greedy: O(N k log N)
implicit: O(N D³)
Memory complexity:
greedy: O(N k) with neighborhood prefetching, O(N) without neighborhood prefetching
implicit: O(D³)

where:
N: number of points
k: size of the neighborhood
D: depth of the octree

Depending on the number of points in ObjectModel3D, noise, and specific structure of the data, both algorithms
deliver different results and perform with different time and memory complexity. The greedy algorithm works fast,
requires less memory, and returns a high level of detail in the reconstructed surface for rather small data sets
(up to, e.g., 500,000 points). Since the algorithm must basically process every single point in the data, its time
performance cannot be decoupled from the number of points and it can be rather time consuming for more than 500,000
points. If large point sets need to be triangulated with this method anyway, it is recommended to first sub-sample
them via sample_object_model_3d.
In contrast, as described above, the implicit algorithm organizes all points in an underlying octree. Therefore, the
details returned by it, its speed, and its memory consumption are dominated by the depth of the octree. While
higher levels of surface details can only be achieved at disproportionately higher time and memory costs, the
octree offers the advantage that it handles large point sets more efficiently. With the octree, the performance of the
implicit algorithm depends mostly on the depth of the octree and to a lesser degree on the number of points to be
processed. One further disadvantage of the implicit algorithm is its requirement that the adjacent point normals are
strictly consistent. This requirement can seldom be fulfilled by usual normal estimation routines.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model containing 3D point data.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Triangulation method.
Default: ’greedy’
List of values: Method ∈ {’greedy’, ’implicit’, ’polygon_triangulation’, ’xyz_mapping’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic triangulation parameters.
Default: []
List of values: GenParamName ∈ {’information’, ’implicit_octree_depth’, ’implicit_solver_depth’,
’implicit_min_num_samples’, ’greedy_radius_type’, ’greedy_radius_value’, ’greedy_kNN’,
’greedy_neigh_orient_tol’, ’greedy_neigh_orient_consistent’, ’greedy_neigh_vertical_tol’,
’greedy_neigh_latitude_tol’, ’greedy_hole_filling’, ’greedy_fix_flips’, ’greedy_mesh_erosion’,
’greedy_mesh_dilation’, ’greedy_remove_small_surfaces’, ’greedy_prefetch_neighbors’, ’greedy_timeout’,
’greedy_suppress_timeout_error’, ’greedy_output_all_points’, ’xyz_mapping_max_area_holes’,
’xyz_mapping_output_all_points’, ’xyz_mapping_max_view_angle’, ’xyz_mapping_max_view_dir_x’,
’xyz_mapping_max_view_dir_y’, ’xyz_mapping_max_view_dir_z’}


. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; real / integer / string


Values of the generic triangulation parameters.
Default: []
Suggested values: GenParamValue ∈ {6, 8, 12, ’true’, ’false’, ’auto’, ’fixed’, ’z_factor’, ’verbose’,
’num_triangles’}
. TriangulatedObjectModel3D (output_control) . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handle of the 3D object model with the triangulated surface.
. Information (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; integer / string
Additional information about the triangulation process.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.

Possible Predecessors
read_object_model_3d, gen_plane_object_model_3d, gen_sphere_object_model_3d,
gen_cylinder_object_model_3d, gen_box_object_model_3d,
gen_sphere_object_model_3d_center, sample_object_model_3d
Possible Successors
write_object_model_3d, render_object_model_3d, project_object_model_3d,
simplify_object_model_3d
References
M. Kazhdan, M. Bolitho, and H. Hoppe: “Poisson Surface Reconstruction.” Symposium on Geometry Processing
(June 2006).
Module
3D Metrology

xyz_to_object_model_3d ( X, Y, Z : : : ObjectModel3D )

Transform 3D points from images to a 3D object model.


The operator xyz_to_object_model_3d transforms an image triple that contains the X, Y, and Z-coordinates
of 3D points to a 3D object model. Only points in the intersecting domains of all three images are used, and the
images need to be of the same size. The size of these images can be queried from the model by
get_object_model_3d_params with ’mapping_size’. The handle of the created 3D object model is re-
turned in ObjectModel3D. The created 3D object model contains the coordinates of the points, as well as a
mapping attribute that contains the original row and column of each 3D point. Points where one of the coordinates
is infinity or "Not a Number" (NaN) are ignored and not added to the 3D object model.
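A minimal sketch of a typical call, assuming the coordinate images X, Y, and Z come from a preceding reconstruction step such as disparity_image_to_xyz:
* Sketch: convert coordinate images into a 3D object model and
* query some of its properties.
xyz_to_object_model_3d (X, Y, Z, ObjectModel3D)
get_object_model_3d_params (ObjectModel3D, 'num_points', NumPoints)
get_object_model_3d_params (ObjectModel3D, 'mapping_size', MappingSize)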
Parameters

. X (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real


Image with the X-Coordinates and the ROI of the 3D points.
. Y (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Image with the Y-Coordinates of the 3D points.
. Z (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Image with the Z-Coordinates of the 3D points.
. ObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model.
Result
The operator xyz_to_object_model_3d returns the value 2 (H_MSG_TRUE) if the given parameters are
correct. Otherwise, an exception will be raised.
Execution Information


• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
disparity_image_to_xyz, get_sheet_of_light_result
Alternatives
gen_object_model_3d_from_points, get_sheet_of_light_result_object_model_3d
See also
read_object_model_3d
Module
3D Metrology



Chapter 5

3D Reconstruction

5.1 Binocular Stereo

binocular_disparity ( ImageRect1, ImageRect2 : Disparity, Score : Method, MaskWidth, MaskHeight, TextureThresh, MinDisparity, MaxDisparity, NumLevels, ScoreThresh, Filter, SubDisparity : )

Compute the disparities of a rectified image pair using correlation techniques.


binocular_disparity computes pixel-wise correspondences between two rectified images using correlation
techniques. Different from binocular_distance the results are not transformed into distance values.
The algorithm requires a reference image ImageRect1 and a search image ImageRect2 which must be
rectified, i.e., corresponding epipolar lines are parallel and lie on identical image rows ( r1 = r2 ). In
case this assumption is violated the images can be rectified by using the operators calibrate_cameras,
gen_binocular_rectification_map, and map_image. Hence, given a pixel in the reference image
ImageRect1 the homologous pixel in ImageRect2 is selected by searching along the corresponding row
in ImageRect2 and matching a local neighborhood within a rectangular window of size MaskWidth and
MaskHeight. The pixel correspondences are returned in the single-channel Disparity image d(r1 , c1 )
which specifies for each pixel (r1, c1) of the reference image ImageRect1 a suitable matching pixel (r2, c2)
of ImageRect2 according to the equation c2 = c1 + d(r1 , c1 ). A quality measure for each disparity value is
returned in Score, containing the best result of the matching function S of a reference pixel. For the matching,
the gray values of the original unprocessed images are used.
The matching function is selected via the parameter Method, which offers three different kinds of correlation:

• ’sad’: Summed Absolute Differences

  $S(r,c,d) = \frac{1}{N} \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} \left| g_1(r',c') - g_2(r',c'+d) \right|$, with $0 \le S(r,c,d) \le 255$.

• ’ssd’: Summed Squared Differences

  $S(r,c,d) = \frac{1}{N} \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} \left( g_1(r',c') - g_2(r',c'+d) \right)^2$, with $0 \le S(r,c,d) \le 65025$.

• ’ncc’: Normalized Cross Correlation

  $S(r,c,d) = \dfrac{\sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} \left( g_1(r',c') - \bar{g}_1(r,c) \right) \left( g_2(r',c'+d) - \bar{g}_2(r,c+d) \right)}{\sqrt{\left( \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} \left( g_1(r',c') - \bar{g}_1(r,c) \right)^2 \right) \left( \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} \left( g_2(r',c'+d) - \bar{g}_2(r,c+d) \right)^2 \right)}}$, with $-1.0 \le S(r,c,d) \le 1.0$.

with
$r_1, c_1, r_2, c_2$: row and column coordinates of the corresponding pixels of the two input images,
$g_1, g_2$: gray values of the unprocessed input images,
$N = (2m+1)(2n+1)$: size of the correlation window,
$\bar{g}(r,c) = \frac{1}{N} \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} g(r',c')$: mean value within the correlation window of width $2m+1$ and height $2n+1$.
Note that the methods ’sad’ and ’ssd’ compare the gray values of the pixels within a mask window directly, whereas
’ncc’ compensates for the mean gray value and its variance within the mask window. Therefore, if the two images
differ in brightness and contrast, this method should be preferred. For images with similar brightness and contrast
’sad’ and ’ssd’ are to be preferred as they are faster because of less complex internal computations.
Note that for the methods ’sad’ and ’ssd’ the quality of the correlation decreases with rising S (the best quality
value is 0), whereas for ’ncc’ it increases with rising S (the best quality value is 1.0).
The size of the correlation window, referenced by 2m + 1 and 2n + 1, has to be odd numbered and is passed in
MaskWidth and MaskHeight. The search space is confined by the minimum and maximum disparity value
MinDisparity and MaxDisparity. Due to pixel values not defined beyond the image border the resulting
domain of Disparity and Score is not set along the image border within a margin of height (MaskHeight-
1)/2 at the top and bottom border and of width (MaskWidth-1)/2 at the left and right border. For the same reason,
the maximum disparity range is reduced at the left and right image border.
Since matching turns out to be highly unreliable when dealing with poorly textured areas, the minimum statistical
spread of gray values within the correlation window can be defined in TextureThresh. This threshold is applied
on both input images ImageRect1 and ImageRect2. In addition, ScoreThresh guarantees the matching
quality and defines the maximum (’sad’,’ssd’) or, respectively, minimum (’ncc’) score value of the correlation
function. Setting Filter to ’left_right_check’, moreover, increases the robustness of the returned matches, as the
result relies on a concurrent direct and reverse match, whereas ’none’ switches it off.
The number of pyramid levels used to improve the time response of binocular_disparity is determined by
NumLevels. Following a coarse-to-fine scheme disparity images of higher levels are computed and segmented
into rectangular subimages of similar disparity to reduce the disparity range on the next lower pyramid level.
TextureThresh and ScoreThresh are applied on every level and the returned domain of the Disparity
and Score images arises from the intersection of the resulting domains of every single level. Generally, pyramid
structures are the more advantageous the more the disparity image can be segmented into regions of homogeneous
disparities and the bigger the disparity range is specified. As a drawback, coarse pyramid levels might lose
important texture information, which can result in deficient disparity values.
Finally, the value ’interpolation’ for parameter SubDisparity performs subpixel refinement of disparities. It is
switched off by setting the parameter to ’none’.
Parameters
. ImageRect1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte
Rectified image of camera 1.
. ImageRect2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte
Rectified image of camera 2.
. Disparity (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Disparity map.
. Score (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Evaluation of the disparity values.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Matching function.
Default: ’ncc’
List of values: Method ∈ {’sad’, ’ssd’, ’ncc’}
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Width of the correlation window.
Default: 11
Suggested values: MaskWidth ∈ {5, 7, 9, 11, 21}
Restriction: 3 <= MaskWidth && odd(MaskWidth)


. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer


Height of the correlation window.
Default: 11
Suggested values: MaskHeight ∈ {5, 7, 9, 11, 21}
Restriction: 3 <= MaskHeight && odd(MaskHeight)
. TextureThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real / integer
Variance threshold of textured image regions.
Default: 0.0
Suggested values: TextureThresh ∈ {0.0, 10.0, 30.0}
Restriction: 0.0 <= TextureThresh
. MinDisparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Minimum of the expected disparities.
Default: -30
Value range: -32768 ≤ MinDisparity ≤ 32767
. MaxDisparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Maximum of the expected disparities.
Default: 30
Value range: -32768 ≤ MaxDisparity ≤ 32767
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of pyramid levels.
Default: 1
Suggested values: NumLevels ∈ {1, 2, 3, 4}
Restriction: 1 <= NumLevels
. ScoreThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real / integer
Threshold of the correlation function.
Default: 0.5
Suggested values: ScoreThresh ∈ {-1.0, 0.0, 0.3, 0.5, 0.7}
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Downstream filters.
Default: ’none’
List of values: Filter ∈ {’none’, ’left_right_check’}
. SubDisparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Subpixel interpolation of disparities.
Default: ’none’
List of values: SubDisparity ∈ {’none’, ’interpolation’}
Example

* Set internal and external stereo parameters.
* Note that, typically, these values are the result of a prior
* calibration.
gen_cam_par_area_scan_division (0.01, -665, 5.2e-006, 5.2e-006, \
                                622, 517, 1280, 1024, CamParam1)
gen_cam_par_area_scan_division (0.01, -731, 5.2e-006, 5.2e-006, \
                                654, 519, 1280, 1024, CamParam2)
create_pose (0.1535,-0.0037,0.0447,0.17,319.84,359.89, \
             'Rp+T', 'gba', 'point', RelPose)
* Compute the mapping for rectified images.
gen_binocular_rectification_map (Map1, Map2, CamParam1, CamParam2, RelPose, \
                                 1, 'viewing_direction', 'bilinear', \
                                 CamParamRect1, CamParamRect2, \
                                 Cam1PoseRect1, Cam2PoseRect2, RelPoseRect)
* Compute the disparities in online images.
while (1)
    grab_image_async (Image1, AcqHandle1, -1)
    map_image (Image1, Map1, ImageRect1)
    grab_image_async (Image2, AcqHandle2, -1)
    map_image (Image2, Map2, ImageRect2)
    binocular_disparity (ImageRect1, ImageRect2, Disparity, Score, 'sad', \
                         11, 11, 20, -40, 20, 2, 25, 'left_right_check', \
                         'interpolation')
endwhile

Result
binocular_disparity returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an excep-
tion is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
Possible Predecessors
map_image
Possible Successors
threshold, disparity_to_distance, disparity_image_to_xyz
Alternatives
binocular_disparity_mg, binocular_disparity_ms, binocular_distance,
binocular_distance_mg, binocular_distance_ms
See also
map_image, gen_binocular_rectification_map, binocular_calibration
Module
3D Metrology

binocular_disparity_mg ( ImageRect1, ImageRect2 : Disparity, Score : GrayConstancy, GradientConstancy, Smoothness, InitialGuess, CalculateScore, MGParamName, MGParamValue : )

Compute the disparities of a rectified stereo image pair using multigrid methods.
binocular_disparity_mg calculates the disparity between two rectified stereo images ImageRect1 and
ImageRect2 and returns it in Disparity. In contrast to binocular_disparity, a variational approach
based on multigrid methods is used. This approach returns disparity values also for image parts that contain no
texture. In contrast to binocular_distance_mg, the results are not transformed into distance values.
The input images must be a pair of rectified stereo images, i.e., corresponding points must have the same vertical
coordinate. The images can have different widths, but must have the same height. The runtime of the operator is
approximately linear in the size of the images.
The disparity is the amount by which each point in the first image ImageRect1 needs to be moved to reach its
corresponding point in the second image ImageRect2. Two points are called corresponding if they are the image
of the same point in the original scene. The calculated disparity field is dense and estimates the disparity also for
points that do not have a corresponding point. The disparity is calculated only for those lines that are part of the
domains of both input images. More exactly, the domain of the disparity map is calculated as the intersection of
heights of the smallest enclosing rectangles of the domains of the input images.
The calculated disparity field is usually not perfect. If the parameter CalculateScore is set to ’true’, a quality
measure for the disparity is estimated for each pixel and returned in Score, which is a gray value image with a
range from 0 to 10, where 0 is the best quality and 10 the worst. For this, the reverse disparity field from the second
to the first image is calculated and compared to the returned disparity field. Because of this, the runtime roughly
doubles when computing the score.
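For example, the score image could be used to restrict the disparity map to well-matched pixels (a sketch; the upper bound of 3 is an illustrative value):
* Sketch: keep only disparities with a good (low) score.
* 0 is the best and 10 the worst score value; 3 is an illustrative bound.
threshold (Score, GoodScoreRegion, 0, 3)
reduce_domain (Disparity, GoodScoreRegion, DisparityReliable)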


The operator uses a variational approach, where an energy value is assigned to each possible disparity field. Dis-
parity fields with a lower energy are better than those with a high energy. The operator calculates a disparity field
with the minimum energy and returns it.
The energy assigned to a disparity field consists of a data term and a smoothness term. The data term models
the fact that corresponding points are images of the same part of the scene and thus have equal gray values. The
smoothness term models the fact that the imaged scene and with it its disparity field is piecewise smooth, which
leads to an interpolation of the disparity into areas with low information from the data term, e.g., areas with no
texture.
The details of the assumptions are as follows:
Constancy of the gray values: It is assumed that corresponding points have the same gray value, i.e., that $I_1(x,y) = I_2(x+u(x,y),y)$.
Constancy of the gray value gradients: It is assumed that corresponding points have the same gray value gradient, i.e., that $\nabla I_1(x,y) = \nabla I_2(x+u(x,y),y)$. Discrepancies from this assumption are modeled using the L2 norm
of the difference of the two gradients. The gray value gradient has the advantage of being invariant to additive
illumination changes between the two images.
Statistical robustness in the data term: To reduce the influence of outliers, i.e., points that violate the constancy
assumptions, they are penalized in a statistically robust manner via the total variation $\Psi(x) = \sqrt{x + \epsilon^2}$, where
$\epsilon = 0.01$ is a fixed regularization constant.
Smoothness of the disparity field: It is assumed that the resulting disparity field is piecewise smooth. This is
modeled by the L2 norm of the derivative of the disparity field.
Statistical robustness in the smoothness term: Analogously to the data term, the statistically robust total variation
is applied to the smoothness term to reduce the influence of outliers. This is especially important for preserving
edges in the disparity field that appear on object boundaries.
The energy functional is the integral of a linear combination of the above terms over the area of the first image. The
coefficients of the linear combination are parameters of the operator and allow a fine tuning of the model to a spe-
cific situation. GrayConstancy determines the influence of the gray value constancy, GradientConstancy
the influence of the constancy of the gray value gradient, and Smoothness the influence of the smoothness term.
The first two parameters need to be adapted to the gray value interval of the images. The proposed parameters are
valid for images with a gray value range of 0 to 255.
Let $I_1(x,y)$ be the gray value of the first image at the coordinates $(x,y)$, $I_2(x,y)$ the gray value of the second
image, and $u(x,y)$ the value of the disparity at the coordinate $(x,y)$. The energy functional is then given by

$E = \int \Psi\Big( \mathrm{GrayConstancy} \cdot \big(I_2(x+u(x,y),y) - I_1(x,y)\big)^2 + \mathrm{GradientConstancy} \cdot \big|\nabla I_2(x+u(x,y),y) - \nabla I_1(x,y)\big|^2 \Big) + \mathrm{Smoothness} \cdot \Psi\big(|\nabla u(x,y)|^2\big) \, dx \, dy$

where the first term inside the outer $\Psi$ models the gray value constancy, the second term the gradient constancy, and the last summand the smoothness of the disparity field.
It is assumed that the disparity field u that minimizes the functional E satisfies the above assumptions and is thus
a good approximation of the disparity between the two images.
The above functional is minimized by finding the roots of the Euler-Lagrange equation (ELE) of the integral. This
is comparable to finding the extremal values of a one-dimensional function by searching the roots of its derivative.
The ELE is a nonlinear partial differential equation over the region of the integral, which needs to be 0 for extrema
of E. Since the functional typically does not have any maxima, the corresponding roots of the ELE correspond to
the minima of the functional.
The following techniques are used to find the roots of the ELE:
Fixed point iteration: The ELE is solved by converting it to a fixed point iteration that iteratively approaches the
solution. The number of iterations can be used to balance between speed and accuracy of the solution. Each step
of the fixed point iteration consists of solving a linear partial differential equation.
Coarse-to-fine process: A Gaussian image pyramid of the stereo images is created. The ELE is first solved on a
coarse level of the pyramid and the solution is taken as the initial value of the fixed point iteration of the next level.
This has a number of advantages and disadvantages:


1. Since the fixed point iteration of the next level receives a good initial value, fewer iterations are necessary to
achieve a good accuracy. The iteration must perform only small corrections of the disparity.
2. Large disparities on the original images become small disparities on the coarse grid levels and can thus be
calculated more easily.
3. The robustness against noise in the images is increased because most kinds of noise disappear on the coarse
version of the images.
4. Problems arise with small structures that have a large disparity difference to their surroundings since they
disappear on coarse versions of the image and thus the disparity of the surroundings is calculated. This error will
not be corrected on the finer levels of the image pyramid since only small corrections are calculated there.
Multigrid methods: The linear partial differential equations that arise in the fixed point iteration at each pyramid
level are converted into a linear system of equations through linearization. These linear systems are solved
using iterative solvers. Multigrid methods are among the most efficient solvers for the kind of linear systems
that arise here. They use the fact that classic iterative solvers, like the Gauss-Seidel solver, quickly reduce
the high frequency parts of the error, but only slowly reduce the low frequency parts. Multigrid methods thus
calculate the error on a coarser grid where the low frequency part of the error appears as high frequencies
and can be reduced quickly by the classical solvers. This is done hierarchically, i.e., the computation of
the error on a coarser resolution level itself uses the same strategy and efficiently computes its error (i.e.,
the error of the error) by correction steps on an even coarser resolution level. Depending on whether one
or two error correction steps are performed per cycle, a so called V or W cycle is obtained. The corre-
sponding strategies for stepping through the resolution hierarchy are as follows for two to four resolution levels:

[Figure: Bidirectional multigrid algorithm — V-cycles and W-cycles traversing the resolution hierarchy between the finest level (1) and the coarsest level (4). Iterations on the original problem are performed on the finest level of a cycle, while the coarser levels carry the iterations on the error correction problems.]
Algorithmically, a correction cycle can be described as follows:
1. In the first step, several (few) iterations using an iterative linear or nonlinear basic solver are performed (e.g.,
a variant of the Gauss-Seidel solver). This step is called pre-relaxation step.
2. In the second step, the current error is computed to correct the current solution (the solution after step 1).
For efficiency reasons, the error is calculated on a coarser resolution level. This step, which can be performed
iteratively several times, is called coarse grid correction step.
3. In a final step, again several (few) iterations using the iterative linear or nonlinear basic solver of step 1 are
performed. This step is called post-relaxation step.
In addition, the solution can be initialized in a hierarchical manner. Starting from a very coarse variant of the
original linear equation system, the solution is successively refined. To do so, interpolated solutions of coarser
variants of the equation system are used as the initialization of the next finer variant. On each resolution level
itself, the V or W cycles described above are used to efficiently solve the linear equation system on that resolution
level. The corresponding multigrid methods are called full multigrid methods in the literature. The full multigrid
algorithm can be visualized as follows:


[Figure: Full multigrid algorithm — hierarchical initialization in which the solution is interpolated from resolution level 4 to 3, 3 to 2, and 2 to 1 (interpolation steps i), with two W correction cycles (w1, w2) per resolution level. Iterations on the original problem take place on the respective resolution level, while the error correction problems are handled on coarser levels.]
Depending on the selected multigrid solver, a number of parameters for fine tuning the solver are available and are
described in the following.
The parameter InitialGuess gives an initial value for the initialization of the fixed point iteration on the coarsest
grid. Usually 0 is sufficient, but to avoid local minima other values can be used.
Using the parameters MGParamName and MGParamValue, the solver is controlled, i.e., the coarse-to-fine pro-
cess, the fixed point iteration, and the multigrid solver. It is usually sufficient to use one of the predefined pa-
rameter sets, which are available by setting MGParamName = ’default_parameters’ and MGParamValue =
’very_accurate’, ’accurate’, ’fast_accurate’, or ’fast’.
If the parameters should be specified individually, MGParamName and MGParamValue must be set to tuples of
the same length. The values corresponding to the parameters specified in MGParamName must be specified at
the corresponding position in MGParamValue. The parameters are evaluated in the given order. Therefore, it is
possible to first select a group of default parameters (see above) and then change only some of the parameters
(see the sketch after the list of predefined parameter sets below). In the following, the possible parameters are described.
MGParamName = ’mg_solver’ sets the solver for the linear system. Possible values for MGParamValue are
’multigrid’ for a simple multigrid solver, ’full_multigrid’ for a full multigrid solver, and ’gauss_seidel’ for the plain
Gauss-Seidel solver. The multigrid methods have the advantage of a faster convergence, but incur the overhead of
coarsening the linear system.
MGParamName = ’mg_cycle_type’ selects the type of recursion for the multigrid solvers. Possible values for
MGParamValue are ’v’ for a V-Cycle, ’w’ for a W-Cycle, and ’none’ for no recursion.
MGParamName = ’mg_pre_relax’ sets the number of iterations of the pre-relaxation step in multigrid solvers, or
the number of iterations for the Gauss-Seidel solver, depending on which is selected.
MGParamName = ’mg_post_relax’ sets the number of iterations of the post-relaxation step.
Increasing the number of pre- and post-relaxation steps increases the computation time asymptotically linearly.
However, no additional restriction and prolongation operations (zooming down and up of the error correction
images) are performed. Consequently, a moderate increase in the number of relaxation steps only leads to a slight
increase in the computation times.
MGParamName = ’initial_level’ sets the coarsest level of the image pyramid where the coarse-to-fine process
starts. The value can be positive, in which case it directly gives the initial level. Level 0 is the finest level with the
original images. If the value is negative, then it is used relative to the maximum number of pyramid levels. The
coarsest available pyramid level is the one where both images have a size of at least 4 pixels in both directions. As
described below, the default value of ’initial_level’ is -2. This facilitates the calculation of the correct disparity for
images that have very large disparities. In some cases, e.g., for repeating textures, this may lead to the fact that too
large disparities are calculated for some parts of the image. In this case, ’initial_level’ should be set to a smaller
value.
The standard parameters zoom the image with a factor of 0.6 per pyramid level. If a guess of the maximum
disparity $d$ exists, then the initial level $s$ should be selected so that $0.6^{-s}$ is greater than $d$.
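As a sketch, this rule can be turned into a small calculation; the maximum disparity guess of 100 pixels is an illustrative value:
* Sketch: derive a suitable 'initial_level' from an expected maximum disparity.
* With the default pyramid factor of 0.6, 0.6^(-s) > d must hold.
MaxDisparityGuess := 100
InitialLevel := int(ceil(log(MaxDisparityGuess) / log(1.0 / 0.6)))
* For a guess of 100 pixels this yields InitialLevel = 10, since 0.6^(-10) is
* approximately 165. InitialLevel can then be passed via MGParamName = 'initial_level'.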
MGParamName = ’iterations’ sets the number of iterations of the fixed point iteration per pyramid level. The
exact number of iterations is steps = min(10, iterations + level2 ), where level is the current level in the image
pyramid. If this value is set to 0, then no iteration is performed on the finest pyramid level 0. Instead, the result of
level 1 is scaled to the original image size and returned, which can be used if speed is crucial. The runtime of the
operator is approximately linear in the number of iterations.
MGParamName = ’pyramid_factor’ determines the factor by which the images are scaled when creating the image
pyramid for the coarse-to-fine process. The width and height of the next smaller image is scaled by the given factor.
The value must lie between 0.1 and 0.9.
The predefined parameter sets for MGParamName = ’default_parameters’ contain the following values:
’default_parameters’ = ’very_accurate’: ’mg_solver’ = ’full_multigrid’, ’mg_cycle_type’ = ’w’, ’mg_pre_relax’
= 5, ’mg_post_relax’ = 5, ’initial_level’ = -2, ’iterations’ = 5, ’pyramid_factor’ = 0.6.
’default_parameters’ = ’accurate’: ’mg_solver’ = ’full_multigrid’, ’mg_cycle_type’ = ’w’, ’mg_pre_relax’ = 5,
’mg_post_relax’ = 5, ’initial_level’ = -2, ’iterations’ = 2, ’pyramid_factor’ = 0.6.
’default_parameters’ = ’fast_accurate’: ’mg_solver’ = ’full_multigrid’, ’mg_cycle_type’ = ’v’, ’mg_pre_relax’
= 2, ’mg_post_relax’ = 2, ’initial_level’ = -2, ’iterations’ = 1, ’pyramid_factor’ = 0.6. These are the default
parameters of the algorithm if the default parameter set is not specified.
’default_parameters’ = ’fast’: ’mg_solver’ = ’full_multigrid’, ’mg_cycle_type’ = ’v’, ’mg_pre_relax’ = 1,
’mg_post_relax’ = 1, ’initial_level’ = -2, ’iterations’ = 0, ’pyramid_factor’ = 0.6.
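A sketch of overriding individual values after selecting a predefined parameter set (the rectified input images and the chosen overrides are assumptions for illustration):
* Sketch: start from the 'accurate' preset, then override single parameters.
* Parameters are evaluated in the given order.
MGParamName := ['default_parameters','initial_level','iterations']
MGParamValue := ['accurate',-3,3]
binocular_disparity_mg (ImageRect1, ImageRect2, Disparity, Score, \
                        1.0, 30.0, 5.0, 0.0, 'false', \
                        MGParamName, MGParamValue)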
Weaknesses of the operator: Large jumps in the disparity, which correspond to large jumps in the distance of the
observed objects, are smoothed rather strongly. This leads to problems with thin objects that have a large distance
to their background.
Distortions can occur at the left and right border of the image in the parts that are visible in only one of the images.
Additionally, general problems of stereo vision should be avoided, including horizontally repetitive patterns, areas
with little texture as well as reflections.
Parameters
. ImageRect1 (input_object) . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte / uint2 / real
Rectified image of camera 1.
. ImageRect2 (input_object) . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte / uint2 / real
Rectified image of camera 2.
. Disparity (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : real
Disparity map.
. Score (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : real
Score of the calculated disparity if CalculateScore is set to ’true’.
. GrayConstancy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Weight of the gray value constancy in the data term.
Default: 1.0
Suggested values: GrayConstancy ∈ {0.0, 1.0, 2.0, 10.0}
Restriction: GrayConstancy >= 0.0
. GradientConstancy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Weight of the gradient constancy in the data term.
Default: 30.0
Suggested values: GradientConstancy ∈ {0.0, 1.0, 5.0, 10.0, 30.0, 50.0, 70.0}
Restriction: GradientConstancy >= 0.0
. Smoothness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Weight of the smoothness term in relation to the data term.
Default: 5.0
Suggested values: Smoothness ∈ {1.0, 3.0, 5.0, 10.0}
Restriction: Smoothness > 0.0
. InitialGuess (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Initial guess of the disparity.
Default: 0.0
Suggested values: InitialGuess ∈ {-30.0, -20.0, -10.0, 0.0, 10.0, 20.0, 30.0}
. CalculateScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Should the quality measure be returned in Score?
Default: ’false’
Suggested values: CalculateScore ∈ {’true’, ’false’}


. MGParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; string


Parameter name(s) for the multigrid algorithm.
Default: ’default_parameters’
List of values: MGParamName ∈ {’default_parameters’, ’mg_solver’, ’mg_cycle_type’, ’mg_pre_relax’,
’mg_post_relax’, ’initial_level’, ’pyramid_factor’, ’iterations’}
. MGParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / real / integer
Parameter value(s) for the multigrid algorithm.
Default: ’fast_accurate’
Suggested values: MGParamValue ∈ {’very_accurate’, ’accurate’, ’fast_accurate’, ’fast’, ’v’, ’w’, ’none’,
’gauss_seidel’, ’multigrid’, ’full_multigrid’, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 0.1, 0.2, 0.3,
0.4, 0.5, 0.6, 0.7, 0.8, 0.9, -1, -2, -3, -4, -5}
Example

read_image (BaseballL, 'stereo/epipolar/baseball_l')
read_image (BaseballR, 'stereo/epipolar/baseball_r')
binocular_disparity_mg (BaseballL, BaseballR, Disparity, Score, \
                        0.25, 30, 5, 0, 'true', \
                        'default_parameters', 'fast_accurate')

Result
If the parameter values are correct, binocular_disparity_mg returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on tuple level.
• Automatically parallelized on internal data level.
Possible Predecessors
map_image
Possible Successors
threshold, disparity_to_distance, disparity_image_to_xyz
Alternatives
binocular_disparity, binocular_disparity_ms, binocular_distance,
binocular_distance_mg, binocular_distance_ms
See also
map_image, gen_binocular_rectification_map, binocular_calibration
Module
3D Metrology

binocular_disparity_ms ( ImageRect1, ImageRect2 : Disparity, Score : MinDisparity, MaxDisparity, SurfaceSmoothing, EdgeSmoothing, GenParamName, GenParamValue : )

Compute the disparities of a rectified stereo image pair using multi-scanline optimization.
binocular_disparity_ms calculates the disparity between two rectified stereo images ImageRect1 and
ImageRect2 using multi-scanline optimization. The resulting disparity image is returned in Disparity. In
contrast to binocular_distance_ms, the results are not transformed into distance values.
For this task, the three operators binocular_disparity, binocular_disparity_mg, and
binocular_disparity_ms can be used. binocular_disparity returns robust results in regions of
sufficient texture but fails where there is none. binocular_disparity_mg interpolates low-texture regions
but blurs discontinuities. binocular_disparity_ms preserves discontinuities and interpolates partially.
binocular_disparity_ms requires a reference image ImageRect1 and a search image ImageRect2
which both must be rectified, i.e., corresponding pixels must have the same row coordinate. If this
assumption is violated, the images can be rectified by using the operators calibrate_cameras,
gen_binocular_rectification_map, and map_image.
ImageRect1 and ImageRect2 can have different widths but must have the same height. Given a pixel in
ImageRect1, the homologous pixel in ImageRect2 is selected by searching along the corresponding row in
ImageRect2 and matching both pixels based on a similarity measure. The disparity is the number of pixels by
which each pixel in ImageRect1 needs to be moved to reach the homologous pixel in ImageRect2.
The search space is confined by the minimum and maximum disparity values MinDisparity and
MaxDisparity. If the minimum and maximum disparity values are set to an empty tuple, they are automat-
ically set to the maximum possible range for the given images ImageRect1.
To calculate the disparities from the similarity measure, the intermediate results are optimized by a multi-scanline
method. The optimization increases the robustness in low-texture areas without blurring discontinuities in the
disparity image. The optimization is controlled by the parameters SurfaceSmoothing and EdgeSmoothing.
SurfaceSmoothing controls the smoothness within surfaces. High values suppress disparity differences of one
pixel. EdgeSmoothing controls the occurrence and the shape of edges. Low values allow many edges, high
values lead to fewer and rounder edges. For both parameters, reasonable values usually range between 0 and 100.
If both parameters are set to zero, no optimization is performed.
The calculation of the disparities can be controlled by generic parameters. The following generic parameters
GenParamName and the corresponding values GenParamValue are supported:

’consistency_check’ Activates an optional post-processing step to increase robustness. Concurrent direct and re-
verse matches between reference patterns in ImageRect1 and ImageRect2 are required for a disparity
value to be returned. The check is switched off by setting GenParamValue to ’false’.
List of values: ’true’, ’false’.
Default: ’true’.
’disparity_offset’ Adapts the quality of the coarse-to-fine approach at discontinuities. The higher the value set in
GenParamValue, the more runtime is required.
Suggested values: 2, 3, 4.
Default: 3.
’method’: Determines the method used to calculate the disparities. The following parameters GenParamValue
can be set:
• ’accurate’: Most accurate calculation method, but requires more runtime and memory compared to the
remaining methods.
• ’fast’: Uses a coarse-to-fine scheme to improve the runtime. The coarse-to-fine scheme works in a
similar way to the scheme explained in binocular_disparity.
The coarse-to-fine method requires significantly less memory and is significantly faster than the ’accu-
rate’ method, especially for large images or a large range of MinDisparity and MaxDisparity.
The coarse-to-fine scheme has the further advantage that it automatically estimates the range of
MinDisparity and MaxDisparity while traversing through the pyramid. As a consequence, nei-
ther MinDisparity nor MaxDisparity needs to be set. However, the generated disparity images
are less accurate for the ’fast’ method than for the default ’accurate’ approach. Especially at sharp
disparity jumps the ’fast’ method preserves discontinuities less accurately.
• ’very_fast’: Also uses a coarse-to-fine scheme to improve the runtime even further. However, this ap-
proach makes numerous assumptions that may lead to a smoothing of the disparities at discontinuities.
Per default, the number of levels of the coarse-to-fine scheme is estimated automatically. It is possible
to set the number of levels explicitly (see ’num_levels’).
The runtime of the operator is approximately linear to the image width, the image height, and the disparity
range. Consequently, the disparity range should be chosen as narrow as possible for large images. The
runtime of the coarse-to-fine scheme (which is used for ’fast’ or ’very_fast’) is approximately linear to the
image width and the image height. For small images and small disparity ranges the runtime of the coarse-to-
fine scheme may be larger than that of the ’accurate’ scheme.
List of values: ’accurate’, ’fast’, ’very_fast’.
Default: ’accurate’.


’num_levels’: Determines the number of pyramid levels that are used for the coarse-to-fine scheme. By setting
GenParamValue to ’auto’, the number of pyramid levels is automatically calculated.
Suggested values: 2, 3, ’auto’.
Default: ’auto’.
’similarity_measure’: Sets the similarity measure to be used. For both options ’census_dense’ (default) and ’cen-
sus_sparse’, the similarity measure is based on the Census transform. A Census transformed image contains
for every pixel information about the intensity topology within a support window around it.
• ’census_dense’: Uses a dense 9 x 7 pixels window and is more suitable for fine structures.
• ’census_sparse’: Uses a sparse 15 x 15 pixels window where only a subset of the pixels is evaluated. Is
more robust in low-texture areas.
List of values: ’census_dense’, ’census_sparse’.
Default: ’census_dense’.
’sub_disparity’: Activates sub-pixel refinement of disparities when set to ’true’. Can be deactivated by setting
’false’.
List of values: ’true’, ’false’.
Default: ’true’.

The resulting disparity is returned in the single-channel image Disparity. A quality measure for each disparity
value is returned in Score, containing the best (lowest) result of the optimized similarity measure of a reference
pixel.
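A sketch of a call that combines the ’fast’ coarse-to-fine method with an automatically estimated disparity range (the rectified input images are assumed; the smoothing values of 50 are the defaults):
* Sketch: 'fast' method with automatic disparity range ([] for both limits).
binocular_disparity_ms (ImageRect1, ImageRect2, Disparity, Score, \
                        [], [], 50, 50, \
                        ['method','similarity_measure'], \
                        ['fast','census_sparse'])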
Parameters
. ImageRect1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte
Rectified image of camera 1.
. ImageRect2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte
Rectified image of camera 2.
. Disparity (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Disparity map.
. Score (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Score of the calculated disparity.
. MinDisparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Minimum of the expected disparities.
Default: -30
Value range: -32768 ≤ MinDisparity ≤ 32768
Restriction: MinDisparity <= MaxDisparity
. MaxDisparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Maximum of the expected disparities.
Default: 30
Value range: -32768 ≤ MaxDisparity ≤ 32768
Restriction: MinDisparity <= MaxDisparity
. SurfaceSmoothing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Smoothing of surfaces.
Default: 50
Suggested values: SurfaceSmoothing ∈ {20, 50, 100}
Restriction: SurfaceSmoothing >= 0
. EdgeSmoothing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Smoothing of edges.
Default: 50
Suggested values: EdgeSmoothing ∈ {20, 50, 100}
Restriction: EdgeSmoothing >= 0
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Parameter name(s) for the multi-scanline algorithm.
Default: []
List of values: GenParamName ∈ {’method’, ’similarity_measure’, ’consistency_check’, ’sub_disparity’,
’num_levels’, ’disparity_offset’}


. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string


Parameter value(s) for the multi-scanline algorithm.
Default: []
Suggested values: GenParamValue ∈ {’accurate’, ’fast’, ’very_fast’, ’census_dense’, ’census_sparse’,
’true’, ’false’, ’auto’}
Example

read_image (BaseballL, 'stereo/epipolar/baseball_l')
read_image (BaseballR, 'stereo/epipolar/baseball_r')
binocular_disparity_ms (BaseballL, BaseballR, Disparity, Score, \
                        -40, -10, 50, 50, [], [])

Result
If the parameter values are correct, binocular_disparity_ms returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Execution Information

• Supports OpenCL compute devices.


• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on tuple level.
• Automatically parallelized on internal data level.
Possible Predecessors
map_image
Possible Successors
threshold, disparity_to_distance, disparity_image_to_xyz
Alternatives
binocular_disparity, binocular_disparity_mg, binocular_distance,
binocular_distance_mg, binocular_distance_ms
See also
map_image, gen_binocular_rectification_map, binocular_calibration
Module
3D Metrology

binocular_distance ( ImageRect1, ImageRect2 : Distance, Score : CamParamRect1, CamParamRect2, RelPoseRect, Method, MaskWidth, MaskHeight, TextureThresh, MinDisparity, MaxDisparity, NumLevels, ScoreThresh, Filter, SubDistance : )

Compute the distance values for a rectified stereo image pair using correlation techniques.
binocular_distance computes the distance values for a rectified stereo image pair using correlation tech-
niques. The operator first calculates the disparities between the two images ImageRect1 and ImageRect2
similar to binocular_disparity. The resulting disparities are transformed into distance values of the cor-
responding 3D world points to the rectified stereo camera system as in disparity_to_distance. The dis-
tances are returned in the single-channel image Distance in which each gray value represents the distance of the
respective 3D world point to the stereo camera system.
The algorithm requires a reference image ImageRect1 and a search image ImageRect2 which must be
rectified, i.e., corresponding epipolar lines are parallel and lie on identical image rows ( r1 = r2 ). In
case this assumption is violated the images can be rectified by using the operators calibrate_cameras,
gen_binocular_rectification_map and map_image. Hence, given a pixel in the reference image
ImageRect1 the homologous pixel in ImageRect2 is selected by searching along the corresponding row
in ImageRect2 and matching a local neighborhood within a rectangular window of size MaskWidth and
MaskHeight. For each defined reference pixel the pixel correspondences are transformed into distances of
the world points defined by the intersection of the lines of sight of a corresponding pixel pair to the z = 0 plane of
the rectified stereo system.
For this transformation the rectified internal camera parameters CamParamRect1 of camera 1 and
CamParamRect2 of camera 2, and the pose with the external parameters RelPoseRect have to be
defined. The latter is of the form $^{\mathrm{ccsR1}}P_{\mathrm{ccsR2}}$ and characterizes the relative pose of both cameras
to each other. More precisely, it specifies the point transformation from the rectified camera system 2
(ccsR2) into the rectified camera system 1 (ccsR1), see Transformations / Poses and “Solution Guide
III-C - 3D Vision”. These parameters can be obtained from the operators calibrate_cameras and
gen_binocular_rectification_map. Finally, a quality measure for each distance value is returned in
Score, containing the best result of the matching function S of a reference pixel. For the matching, the gray
values of the original unprocessed images are used.

• ’sad’: Summed Absolute Differences

  $S(r,c,d) = \frac{1}{N} \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} \left| g_1(r',c') - g_2(r',c'+d) \right|$, with $0 \le S(r,c,d) \le 255$.

• ’ssd’: Summed Squared Differences

  $S(r,c,d) = \frac{1}{N} \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} \left( g_1(r',c') - g_2(r',c'+d) \right)^2$, with $0 \le S(r,c,d) \le 65025$.

• ’ncc’: Normalized Cross Correlation

  $S(r,c,d) = \dfrac{\sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} \left( g_1(r',c') - \bar{g}_1(r,c) \right) \left( g_2(r',c'+d) - \bar{g}_2(r,c+d) \right)}{\sqrt{\left( \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} \left( g_1(r',c') - \bar{g}_1(r,c) \right)^2 \right) \left( \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} \left( g_2(r',c'+d) - \bar{g}_2(r,c+d) \right)^2 \right)}}$, with $-1.0 \le S(r,c,d) \le 1.0$.

with
$r_1, c_1, r_2, c_2$: row and column coordinates of the corresponding pixels of the two input images,
$g_1, g_2$: gray values of the unprocessed input images,
$N = (2m+1)(2n+1)$: size of the correlation window,
$\bar{g}(r,c) = \frac{1}{N} \sum_{r'=r-m}^{r+m} \sum_{c'=c-n}^{c+n} g(r',c')$: mean value within the correlation window of width $2m+1$ and height $2n+1$.
Note that the methods ’sad’ and ’ssd’ compare the gray values of the pixels within a mask window directly,
whereas ’ncc’ compensates for the mean gray value and its variance within the mask window. Therefore, if the
two images differ in brightness and contrast, this method should be preferred. For images with similar brightness
and contrast ’sad’ and ’ssd’ are to be preferred as they are faster because of less complex internal computations.
See binocular_disparity for further details.
Note that for the methods ’sad’ and ’ssd’ the quality of the correlation decreases with rising S (the best quality
value is 0), whereas for ’ncc’ it increases with rising S (the best quality value is 1.0).
The size of the correlation window (2m + 1 and 2n + 1) has to be odd numbered and is passed in MaskWidth and
MaskHeight. The search space is confined by the minimum and maximum disparity value MinDisparity and
MaxDisparity. Due to pixel values not defined beyond the image border the resulting domain of Distance
and Score is generally not set along the image border within a margin of height MaskHeight/2 at the top
and bottom border and of width MaskWidth/2 at the left and right border. For the same reason, the maximum
disparity range is reduced at the left and right image border.
Since matching turns out to be highly unreliable when dealing with poorly textured areas, the minimum variance
within the correlation window can be defined in TextureThresh. This threshold is applied on both input images
ImageRect1 and ImageRect2. In addition, ScoreThresh guarantees the matching quality and defines the
maximum (’sad’,’ssd’) or, respectively, minimum (’ncc’) score value of the correlation function. Setting Filter
to ’left_right_check’, moreover, increases the robustness of the returned matches, as the result relies on a concurrent
direct and reverse match, whereas ’none’ switches it off.


The number of pyramid levels used to improve the time response of binocular_distance is determined by
NumLevels. Following a coarse-to-fine scheme disparity images of higher levels are computed and segmented
into rectangular subimages to reduce the disparity range on the next lower pyramid level. TextureThresh and
ScoreThresh are applied on every level and the returned domain of the Distance and Score images arises
from the intersection of the resulting domains of every single level. Generally, pyramid structures are the more
advantageous the more the distance image can be segmented into regions of homogeneous distance values and the
bigger the disparity range must be specified. As a drawback, coarse pyramid levels might lose important texture
information, which can result in deficient distance values.
Finally, the value ’interpolation’ for parameter SubDistance increases the refinement and accuracy of the dis-
tance values. It is switched off by setting the parameter to ’none’.
Attention
If using cameras with telecentric lenses, the Distance is not defined as the distance of a point to the camera
but as the distance from the point to the plane, defined by the y-axes of both cameras and their baseline (see
gen_binocular_rectification_map).
For a stereo setup of mixed type (i.e., for a stereo setup in which one of the original cameras is a perspective camera
and the other camera is a telecentric camera; see gen_binocular_rectification_map), the rectifying
plane of the two cameras is in a position with respect to the object that would lead to very unintuitive distances.
Therefore, binocular_distance does not support a stereo setup of mixed type. For stereo setups of mixed
type, please use reconstruct_surface_stereo, in which the reference coordinate system can be chosen
arbitrarily. Alternatively, binocular_disparity and disparity_image_to_xyz might be used.
Additionally, stereo setups that contain cameras with and without hypercentric lenses at the same time are not
supported.
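A sketch of the alternative mentioned above, computing disparities with binocular_disparity and converting them to X, Y, and Z coordinate images; the rectified images, rectified camera parameters, and the chosen matching parameters are assumptions for illustration:
* Sketch: disparities plus disparity_image_to_xyz instead of binocular_distance.
binocular_disparity (ImageRect1, ImageRect2, Disparity, Score, 'ncc', \
                     11, 11, 10, -40, 40, 2, 0.5, 'left_right_check', \
                     'interpolation')
disparity_image_to_xyz (Disparity, X, Y, Z, CamParamRect1, CamParamRect2, \
                        RelPoseRect)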
Parameters
. ImageRect1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte
Rectified image of camera 1.
. ImageRect2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte
Rectified image of camera 2.
. Distance (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Distance image.
. Score (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Evaluation of a distance value.
. CamParamRect1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters of the rectified camera 1.
. CamParamRect2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters of the rectified camera 2.
. RelPoseRect (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Point transformation from the rectified camera 2 to the rectified camera 1.
Number of elements: 7
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Matching function.
Default: ’ncc’
List of values: Method ∈ {’sad’, ’ssd’, ’ncc’}
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Width of the correlation window.
Default: 11
Suggested values: MaskWidth ∈ {5, 7, 9, 11, 21}
Restriction: 3 <= MaskWidth && odd(MaskWidth)
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Height of the correlation window.
Default: 11
Suggested values: MaskHeight ∈ {5, 7, 9, 11, 21}
Restriction: 3 <= MaskHeight && odd(MaskHeight)
. TextureThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real / integer
Variance threshold of textured image regions.
Default: 0.0
Suggested values: TextureThresh ∈ {0.0, 2.0, 5.0, 10.0}
Restriction: 0.0 <= TextureThresh


. MinDisparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Minimum of the expected disparities.
Default: 0
Value range: -32768 ≤ MinDisparity ≤ 32767
. MaxDisparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Maximum of the expected disparities.
Default: 30
Value range: -32768 ≤ MaxDisparity ≤ 32767
. NumLevels (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of pyramid levels.
Default: 1
Suggested values: NumLevels ∈ {1, 2, 3, 4}
Restriction: 1 <= NumLevels
. ScoreThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real / integer
Threshold of the correlation function.
Default: 0.0
Suggested values: ScoreThresh ∈ {0.0, 2.0, 5.0, 10.0}
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Downstream filters.
Default: ’none’
List of values: Filter ∈ {’none’, ’left_right_check’}
. SubDistance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Distance interpolation.
Default: ’none’
List of values: SubDistance ∈ {’none’, ’interpolation’}
Example

* Set internal and external stereo parameters.
* Note that, typically, these values are the result of a prior
* calibration.
gen_cam_par_area_scan_division (0.01, -665, 5.2e-006, 5.2e-006, \
                                622, 517, 1280, 1024, CamParam1)
gen_cam_par_area_scan_division (0.01, -731, 5.2e-006, 5.2e-006, \
                                654, 519, 1280, 1024, CamParam2)
create_pose (0.1535, -0.0037, 0.0447, 0.17, 319.84, 359.89, \
             'Rp+T', 'gba', 'point', RelPose)
* Compute the mapping for rectified images.
gen_binocular_rectification_map (Map1, Map2, CamParam1, CamParam2, \
                                 RelPose, 1, 'viewing_direction', \
                                 'bilinear', CamParamRect1, CamParamRect2, \
                                 Cam1PoseRect1, Cam2PoseRect2, RelPoseRect)
* Compute the distance values in online images.
while (1)
  grab_image_async (Image1, AcqHandle1, -1)
  map_image (Image1, Map1, ImageRect1)
  grab_image_async (Image2, AcqHandle2, -1)
  map_image (Image2, Map2, ImageRect2)
  binocular_distance (ImageRect1, ImageRect2, Distance, Score, \
                      CamParamRect1, CamParamRect2, RelPoseRect, 'sad', \
                      11, 11, 20, -40, 20, 2, 25, \
                      'left_right_check', 'interpolation')
endwhile

Result
binocular_distance returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an excep-
tion is raised.


Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
Possible Predecessors
map_image
Possible Successors
threshold
Alternatives
binocular_distance_mg, binocular_distance_ms, binocular_disparity,
binocular_disparity_mg, binocular_disparity_ms
See also
map_image, gen_binocular_rectification_map, binocular_calibration,
distance_to_disparity, disparity_to_distance, disparity_image_to_xyz
Module
3D Metrology

binocular_distance_mg ( ImageRect1, ImageRect2 : Distance,
Score : CamParamRect1, CamParamRect2, RelPoseRect, GrayConstancy,
GradientConstancy, Smoothness, InitialGuess, CalculateScore,
MGParamName, MGParamValue : )

Compute the distance values for a rectified stereo image pair using multigrid methods.
binocular_distance_mg computes the distance values for a rectified stereo image pair using multi-
grid methods. The operator first calculates the disparities between two rectified images ImageRect1 and
ImageRect2 similar to binocular_disparity_mg. The resulting disparity values are then trans-
formed into distance values of the corresponding 3D world points to the rectified stereo camera system as in
disparity_to_distance. The distances are returned in the single-channel image Distance in which each
gray value represents the distance of the respective 3D world point to the stereo camera system. In contrast to
binocular_distance, this operator uses a variational approach based on multigrid methods. This approach
returns distance values also for image parts that contain no texture.
The input images ImageRect1 and ImageRect2 must be a pair of rectified stereo images, i.e., corresponding
points must have the same row coordinate. In case this assumption is violated the images can be rectified by using
the operators calibrate_cameras, gen_binocular_rectification_map and map_image.
For the transformation of the disparity to the distance, the internal camera parameters of the rectified cam-
era 1 CamParamRect1 and of the rectified camera 2 CamParamRect2, as well as the relative pose of the
cameras RelPoseRect must be specified. The relative pose defines a point transformation from the recti-
fied camera system 2 to the rectified camera system 1. These parameters can be obtained from the operators
calibrate_cameras and gen_binocular_rectification_map.
A detailed description of the algorithm and of the remaining parameters can be found in the documentation of
binocular_disparity_mg.
Attention
If using cameras with telecentric lenses, the Distance is not defined as the distance of a point to the camera
but as the distance from the point to the plane, defined by the y-axes of both cameras and their baseline (see
gen_binocular_rectification_map).
For a stereo setup of mixed type (i.e., for a stereo setup in which one of the original cameras is a perspective camera
and the other camera is a telecentric camera; see gen_binocular_rectification_map), the rectifying
plane of the two cameras is in a position with respect to the object that would lead to very unintuitive distances.
Therefore, binocular_distance_mg does not support a stereo setup of mixed type. For stereo setups of
mixed type, please use reconstruct_surface_stereo, in which the reference coordinate system can be
chosen arbitrarily. Alternatively, binocular_disparity_mg and disparity_image_to_xyz might be
used.


Additionally, stereo setups that contain cameras with and without hypercentric lenses at the same time are not
supported.
Parameters
. ImageRect1 (input_object) . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte / uint2 / real
Rectified image of camera 1.
. ImageRect2 (input_object) . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte / uint2 / real
Rectified image of camera 2.
. Distance (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : real
Distance image.
. Score (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : real
Score of the calculated disparity if CalculateScore is set to ’true’.
. CamParamRect1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters of the rectified camera 1.
. CamParamRect2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters of the rectified camera 2.
. RelPoseRect (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Point transformation from the rectified camera 2 to the rectified camera 1.
Number of elements: 7
. GrayConstancy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Weight of the gray value constancy in the data term.
Default: 1.0
Suggested values: GrayConstancy ∈ {0.0, 1.0, 2.0, 10.0}
Restriction: GrayConstancy >= 0.0
. GradientConstancy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Weight of the gradient constancy in the data term.
Default: 30.0
Suggested values: GradientConstancy ∈ {0.0, 1.0, 5.0, 10.0, 30.0, 50.0, 70.0}
Restriction: GradientConstancy >= 0.0
. Smoothness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Weight of the smoothness term in relation to the data term.
Default: 5.0
Suggested values: Smoothness ∈ {1.0, 3.0, 5.0, 10.0}
Restriction: Smoothness > 0.0
. InitialGuess (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Initial guess of the disparity.
Default: 0.0
Suggested values: InitialGuess ∈ {-30.0, -20.0, -10.0, 0.0, 10.0, 20.0, 30.0}
. CalculateScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Should the quality measure be returned in Score?
Default: ’false’
Suggested values: CalculateScore ∈ {’true’, ’false’}
. MGParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; string
Parameter name(s) for the multigrid algorithm.
Default: ’default_parameters’
List of values: MGParamName ∈ {’default_parameters’, ’mg_solver’, ’mg_cycle_type’, ’mg_pre_relax’,
’mg_post_relax’, ’initial_level’, ’pyramid_factor’, ’iterations’}
. MGParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / real / integer
Parameter value(s) for the multigrid algorithm.
Default: ’fast_accurate’
Suggested values: MGParamValue ∈ {’very_accurate’, ’accurate’, ’fast_accurate’, ’fast’, ’v’, ’w’, ’none’,
’gauss_seidel’, ’multigrid’, ’full_multigrid’, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 0.1, 0.2, 0.3,
0.4, 0.5, 0.6, 0.7, 0.8, 0.9, -1, -2, -3, -4, -5}
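Example
A minimal usage sketch; the rectification maps Map1 and Map2 as well as the rectified camera parameters are
assumed to have been created beforehand with calibrate_cameras and
gen_binocular_rectification_map. The parameter values essentially repeat the documented defaults;
CalculateScore is set to ’true’ so that Score is returned.

* Rectify the stereo images (Map1/Map2 from gen_binocular_rectification_map).
map_image (Image1, Map1, ImageRect1)
map_image (Image2, Map2, ImageRect2)
* Compute dense distance values with the multigrid approach.
binocular_distance_mg (ImageRect1, ImageRect2, Distance, Score, \
                       CamParamRect1, CamParamRect2, RelPoseRect, \
                       1.0, 30.0, 5.0, 0.0, 'true', \
                       'default_parameters', 'fast_accurate')
* Segment, e.g., all points closer than 0.5 m to the camera system.
threshold (Distance, NearRegion, 0.0, 0.5)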
Result
If the parameter values are correct, binocular_distance_mg returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.


Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on tuple level.
• Automatically parallelized on internal data level.
Possible Predecessors
map_image
Possible Successors
threshold
Alternatives
binocular_distance, binocular_distance_ms, binocular_disparity,
binocular_disparity_mg, binocular_disparity_ms
See also
map_image, gen_binocular_rectification_map, binocular_calibration,
disparity_to_distance, distance_to_disparity, disparity_image_to_xyz
Module
3D Metrology

binocular_distance_ms ( ImageRect1, ImageRect2 : Distance,
Score : CamParamRect1, CamParamRect2, RelPoseRect, MinDisparity,
MaxDisparity, SurfaceSmoothing, EdgeSmoothing, GenParamName,
GenParamValue : )

Compute the distance values for a rectified stereo image pair using multi-scanline optimization.
binocular_distance_ms computes the distance values for a rectified stereo image pair using multi-
scanline optimization. The operator first calculates the disparities between two rectified images ImageRect1
and ImageRect2 similar to binocular_disparity_ms. The resulting disparity values are then trans-
formed into distance values of the corresponding 3D world points to the rectified stereo camera system as in
disparity_to_distance. The distances are returned in the single-channel image Distance in which each
gray value represents the distance of the respective 3D world point to the stereo camera system.
binocular_distance_ms requires a reference image ImageRect1 and a search image ImageRect2,
which both must be rectified, i.e., corresponding pixels must have the same row coordinate. If this
assumption is violated, the images can be rectified by using the operators calibrate_cameras,
gen_binocular_rectification_map, and map_image.
For the transformation of the disparity to the distance, the internal camera parameters of the rectified cam-
era 1 CamParamRect1 and of the rectified camera 2 CamParamRect2, as well as the relative pose of the
cameras RelPoseRect must be specified. The relative pose defines a point transformation from the recti-
fied camera system 2 to the rectified camera system 1. These parameters can be obtained from the operators
calibrate_cameras and gen_binocular_rectification_map.
A detailed description of the remaining parameters can be found in the documentation of
binocular_disparity_ms.
Attention
If using cameras with telecentric lenses, the Distance is not defined as the distance of a point to the camera
but as the distance from the point to the plane, defined by the y-axes of both cameras and their baseline (see
gen_binocular_rectification_map).
For a stereo setup of mixed type (i.e., for a stereo setup in which one of the original cameras is a perspective camera
and the other camera is a telecentric camera; see gen_binocular_rectification_map), the rectifying
plane of the two cameras is in a position with respect to the object that would lead to very unintuitive distances.
Therefore, binocular_distance_ms does not support a stereo setup of mixed type. For stereo setups of
mixed type, please use reconstruct_surface_stereo, in which the reference coordinate system can be


chosen arbitrarily. Alternatively, binocular_disparity_ms and disparity_image_to_xyz might be


used.
Additionally, stereo setups that contain cameras with and without hypercentric lenses at the same time are not
supported.
Parameters
. ImageRect1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte
Rectified image of camera 1.
. ImageRect2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte
Rectified image of camera 2.
. Distance (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Distance image.
. Score (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Score of the calculated disparity.
. CamParamRect1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters of the rectified camera 1.
. CamParamRect2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters of the rectified camera 2.
. RelPoseRect (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Point transformation from the rectified camera 2 to the rectified camera 1.
Number of elements: 7
. MinDisparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Minimum of the expected disparities.
Default: -30
Value range: -32768 ≤ MinDisparity ≤ 32768
Restriction: MinDisparity <= MaxDisparity
. MaxDisparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Maximum of the expected disparities.
Default: 30
Value range: -32768 ≤ MaxDisparity ≤ 32768
Restriction: MinDisparity <= MaxDisparity
. SurfaceSmoothing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Smoothing of surfaces.
Default: 50
Suggested values: SurfaceSmoothing ∈ {20, 50, 100}
Restriction: SurfaceSmoothing >= 0
. EdgeSmoothing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Smoothing of edges.
Default: 50
Suggested values: EdgeSmoothing ∈ {20, 50, 100}
Restriction: EdgeSmoothing >= 0
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Parameter name(s) for the multi-scanline algorithm.
Default: []
List of values: GenParamName ∈ {’similarity_measure’, ’disparity_offset’, ’num_levels’,
’consistency_check’, ’sub_disparity’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer
Parameter value(s) for the multi-scanline algorithm.
Default: []
Suggested values: GenParamValue ∈ {’census_dense’, ’census_sparse’, ’true’, ’false’}
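Example
A minimal usage sketch; the rectification maps and the rectified camera parameters are assumed to have been
created beforehand with calibrate_cameras and gen_binocular_rectification_map, and the numeric
values merely repeat the documented defaults.

* Rectify the stereo images.
map_image (Image1, Map1, ImageRect1)
map_image (Image2, Map2, ImageRect2)
* Compute distance values with multi-scanline optimization.
binocular_distance_ms (ImageRect1, ImageRect2, Distance, Score, \
                       CamParamRect1, CamParamRect2, RelPoseRect, \
                       -30, 30, 50, 50, [], [])
* Segment, e.g., all points closer than 0.5 m to the camera system.
threshold (Distance, NearRegion, 0.0, 0.5)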
Result
If the parameter values are correct, binocular_distance_ms returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Execution Information


• Supports OpenCL compute devices.


• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on tuple level.
• Automatically parallelized on internal data level.
Possible Predecessors
map_image
Possible Successors
threshold
Alternatives
binocular_distance, binocular_distance_mg, binocular_disparity,
binocular_disparity_mg, binocular_disparity_ms
See also
map_image, gen_binocular_rectification_map, binocular_calibration,
disparity_to_distance, distance_to_disparity, disparity_image_to_xyz
Module
3D Metrology

disparity_image_to_xyz ( Disparity : X, Y, Z : CamParamRect1,
CamParamRect2, RelPoseRect : )

Transform a disparity image into 3D points in a rectified stereo system.


Given the disparity image Disparity of a rectified binocular stereo system, disparity_image_to_xyz
computes the corresponding 3D points. Their coordinates relative to the rectified camera 1 are stored as gray
values in the images X, Y, and Z, i.e., the pixels at the position (Row,Column) in X, Y, and Z contain the x, y, and
z coordinate, respectively, of the pixel (Row,Column) in the disparity image.
The rectified binocular camera system is specified by its internal camera parameters CamParamRect1 of the
rectified camera 1 and CamParamRect2 of the rectified camera 2, and the external parameters RelPoseRect.
The latter one is a pose in the form ccsR1 PccsR2 , thus it defines the relative pose of the rectified camera coordinate
system 2 (ccsR2) relative to the rectified camera coordinate system 1 (ccsR1) (see Transformations / Poses and
“Solution Guide III-C - 3D Vision”). These camera parameters can be obtained from the operators
calibrate_cameras and gen_binocular_rectification_map.
Attention
Stereo setups that contain cameras with and without hypercentric lenses at the same time are not supported.
Parameters
. Disparity (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : real
Disparity image.
. X (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : real
X coordinates of the points in the rectified camera system 1.
. Y (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : real
Y coordinates of the points in the rectified camera system 1.
. Z (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : real
Z coordinates of the points in the rectified camera system 1.
. CamParamRect1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters of the rectified camera 1.
. CamParamRect2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters of the rectified camera 2.
. RelPoseRect (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Pose of the rectified camera 2 in relation to the rectified camera 1.
Number of elements: 7


Example

disparity_image_to_xyz (ImageDisparity, ImgX, ImgY, ImgZ, RectCamParL, \
                        RectCamParR, RectLPosRectR)
get_region_points (ImageDisparity, Rows, Columns)
get_grayval (ImgX, Rows, Columns, XValues)
get_grayval (ImgY, Rows, Columns, YValues)
get_grayval (ImgZ, Rows, Columns, ZValues)

Result
The operator disparity_image_to_xyz returns the value 2 (H_MSG_TRUE) if the input is not empty.
The behavior in case of empty input (no input image available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on tuple level.
• Automatically parallelized on domain level.
Possible Predecessors
binocular_disparity
Possible Successors
threshold, write_image
Alternatives
disparity_to_point_3d, binocular_distance
See also
binocular_calibration, gen_binocular_rectification_map,
intersect_lines_of_sight
Module
3D Metrology

disparity_to_distance ( : : CamParamRect1, CamParamRect2,
RelPoseRect, Disparity : Distance )

Transform a disparity value into a distance value in a rectified binocular stereo system.
disparity_to_distance transforms a disparity value into the distance of an object point to the binocular
stereo system. The cameras of this system must be rectified and are defined by the rectified internal parameters
CamParamRect1 of camera 1 and CamParamRect2 of camera 2, and the external parameters RelPoseRect.
The latter specifies the relative pose of both cameras to each other by defining a point transformation from the
rectified camera system 2 to the rectified camera system 1. These parameters can be obtained from the operators
calibrate_cameras and gen_binocular_rectification_map. The disparity value Disparity is
defined by the column difference of the image coordinates of two corresponding points on an epipolar line
according to the equation d = c2 − c1 (see also binocular_disparity). This value characterizes the set of
3D object points at an equal distance to a plane parallel to the rectified image plane of the stereo system. The
distance to the plane z = 0, which is parallel to the rectified image plane and contains the optical centers of both
cameras, is returned in Distance.
Attention
If using cameras with telecentric lenses, the Distance is not defined as the distance of a point to the camera
but as the distance from the point to the plane, defined by the y-axes of both cameras and their baseline (see
gen_binocular_rectification_map).
For a stereo setup of mixed type (i.e., for a stereo setup in which one of the original cameras is a perspective camera
and the other camera is a telecentric camera; see gen_binocular_rectification_map), the rectifying


plane of the two cameras is in a position with respect to the object that would lead to very unintuitive distances.
Therefore, disparity_to_distance does not support stereo setups of mixed type. For stereo setups of mixed
type, disparity_to_point_3d should be used instead.
Additionally, stereo setups that contain cameras with and without hypercentric lenses at the same time are not
supported.
Parameters
. CamParamRect1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Rectified internal camera parameters of camera 1.
. CamParamRect2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Rectified internal camera parameters of camera 2.
. RelPoseRect (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Point transformation from the rectified camera 2 to the rectified camera 1.
Number of elements: 7
. Disparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Disparity between the images of the world point.
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Distance of a world point to the rectified camera system.
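Example
A minimal sketch; the rectified camera parameters and RelPoseRect are assumed to come from a prior
calibration and gen_binocular_rectification_map, and the disparity value is purely illustrative.

* Convert a disparity of 42.5 pixels into the distance of the
* corresponding 3D point to the rectified camera system.
disparity_to_distance (CamParamRect1, CamParamRect2, RelPoseRect, \
                       42.5, Distance)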
Result
disparity_to_distance returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an
exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
binocular_calibration, gen_binocular_rectification_map, map_image,
binocular_disparity
Alternatives
binocular_distance
See also
distance_to_disparity, disparity_to_point_3d
Module
3D Metrology

disparity_to_point_3d ( : : CamParamRect1, CamParamRect2,
RelPoseRect, Row1, Col1, Disparity : X, Y, Z )

Transform an image point and its disparity into a 3D point in a rectified stereo system.
Given an image point of the rectified camera 1, specified by its image coordinates (Row1,Col1), and its disparity in
a rectified binocular stereo system, disparity_to_point_3d computes the corresponding three dimensional
object point. The disparity value Disparity defines the column difference of the image coordinates of two
corresponding features on an epipolar line according to the equation d = c2 − c1 . The rectified binocular camera
system is specified by its internal camera parameters CamParamRect1 of camera 1 and CamParamRect2 of
camera 2, and the external parameters RelPoseRect defining the pose of the rectified camera 2 in relation to
the rectified camera 1. These camera parameters can be obtained from the operators calibrate_cameras and
gen_binocular_rectification_map. The 3D point is returned in Cartesian coordinates (X,Y,Z) of the
rectified camera system 1.
Attention
Stereo setups that contain cameras with and without hypercentric lenses at the same time are not supported.


Parameters
. CamParamRect1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Rectified internal camera parameters of camera 1.
. CamParamRect2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Rectified internal camera parameters of camera 2.
. RelPoseRect (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Pose of the rectified camera 2 in relation to the rectified camera 1.
Number of elements: 7
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Row coordinate of a point in the rectified image 1.
. Col1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Column coordinate of a point in the rectified image 1.
. Disparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Disparity of the images of the world point.
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
X coordinate of the 3D point.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Y coordinate of the 3D point.
. Z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Z coordinate of the 3D point.
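Example
A minimal sketch; the rectified camera parameters and RelPoseRect are assumed to come from a prior
calibration and gen_binocular_rectification_map, and the image point and disparity are purely
illustrative.

* Reconstruct the 3D point for the image point (240.0, 320.0) of the
* rectified image 1 and a disparity of 42.5 pixels.
disparity_to_point_3d (CamParamRect1, CamParamRect2, RelPoseRect, \
                       240.0, 320.0, 42.5, X, Y, Z)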
Result
disparity_to_point_3d returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an
exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
binocular_calibration, gen_binocular_rectification_map
Alternatives
disparity_image_to_xyz
See also
binocular_disparity, binocular_distance, intersect_lines_of_sight
Module
3D Metrology

distance_to_disparity ( : : CamParamRect1, CamParamRect2,
RelPoseRect, Distance : Disparity )

Transform a distance value into a disparity in a rectified stereo system.


distance_to_disparity transforms a distance of a 3D point to the binocular stereo system into a dis-
parity value. The cameras of this system must be rectified and are defined by the rectified internal parame-
ters CamParamRect1 of camera 1 and CamParamRect2 of camera 2, and the external parameters
RelPoseRect. The latter specifies the relative pose of both camera systems to each other by defining a point
transformation from the rectified camera system 2 to the rectified camera system 1. These parameters can be
obtained from the operators calibrate_cameras and gen_binocular_rectification_map. The dis-
tance value is passed in Distance and the resulting disparity value Disparity is defined by the column
difference of the image coordinates of two corresponding features on an epipolar line according to the equation
d = c2 − c1 .
Attention
If using cameras with telecentric lenses, the Distance is not defined as the distance of a point to the camera


but as the distance from the point to the plane, defined by the y-axes of both cameras and their baseline (see
gen_binocular_rectification_map).
For stereo setups of mixed type (i.e., for a stereo setup in which one of the original cameras is a perspective camera
and the other camera is a telecentric camera; see gen_binocular_rectification_map), the rectifying
plane of the two cameras is in a position with respect to the object that would lead to very unintuitive distances.
Therefore, distance_to_disparity does not support stereo setups of mixed type.
Additionally, stereo setups that contain cameras with and without hypercentric lenses at the same time are not
supported.
Parameters

. CamParamRect1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Rectified internal camera parameters of camera 1.
. CamParamRect2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Rectified internal camera parameters of camera 2.
. RelPoseRect (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Point transformation from the rectified camera 2 to the rectified camera 1.
Number of elements: 7
. Distance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Distance of a world point to camera 1.
. Disparity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Disparity between the images of the point.
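Example
A minimal sketch; the rectified camera parameters and RelPoseRect are assumed to come from a prior
calibration and gen_binocular_rectification_map, and the distance values are purely illustrative.

* Determine which disparities correspond to object distances of 0.4 m
* and 0.6 m, e.g., to restrict the disparity search range of a
* subsequent call to binocular_disparity.
distance_to_disparity (CamParamRect1, CamParamRect2, RelPoseRect, \
                       [0.4, 0.6], Disparity)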
Result
distance_to_disparity returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an
exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
binocular_calibration, gen_binocular_rectification_map
Possible Successors
binocular_disparity
Module
3D Metrology

essential_to_fundamental_matrix ( : : EMatrix, CovEMat, CamMat1,
CamMat2 : FMatrix, CovFMat )

Compute the fundamental matrix from an essential matrix.


The fundamental matrix is the entity describing the epipolar constraint in image coordinates (C,R) and the essential
matrix is its counterpart for 3D direction vectors (X,Y,1):

$\begin{pmatrix} C_2 \\ R_2 \\ 1 \end{pmatrix}^{T} \cdot \mathrm{FMatrix} \cdot \begin{pmatrix} C_1 \\ R_1 \\ 1 \end{pmatrix} = 0 \quad \text{and} \quad \begin{pmatrix} X_2 \\ Y_2 \\ 1 \end{pmatrix}^{T} \cdot \mathrm{EMatrix} \cdot \begin{pmatrix} X_1 \\ Y_1 \\ 1 \end{pmatrix} = 0 .$

Image coordinates result from 3D direction vectors by multiplication with the camera matrix CamMat:

$\begin{pmatrix} \mathrm{col} \\ \mathrm{row} \\ 1 \end{pmatrix} = \mathrm{CamMat} \cdot \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix} .$


Therefore, the fundamental matrix FMatrix is calculated from the essential matrix EMatrix and the camera
matrices CamMat1, CamMat2 by the following formula:

$\mathrm{FMatrix} = \mathrm{CamMat2}^{-T} \cdot \mathrm{EMatrix} \cdot \mathrm{CamMat1}^{-1} .$

The transformation of the essential matrix to the fundamental matrix goes along with the propagation of the co-
variance matrices CovEMat to CovFMat. If CovEMat is empty CovFMat will be empty too.
The conversion operator essential_to_fundamental_matrix is used especially for a subsequent visual-
ization of the epipolar line structure via the fundamental matrix, which depicts the underlying stereo geometry.
Parameters

. EMatrix (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real / integer
Essential matrix.
. CovEMat (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
9 × 9 covariance matrix of the essential matrix.
Default: []
. CamMat1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real / integer
Camera matrix of the 1. camera.
. CamMat2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real / integer
Camera matrix of the 2. camera.
. FMatrix (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real
Computed fundamental matrix.
. CovFMat (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
9 × 9 covariance matrix of the fundamental matrix.
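Example
A minimal sketch; EMatrix, CovEMat, and the camera matrices CamMat1 and CamMat2 are assumed to have
been determined beforehand, e.g., with vector_to_essential_matrix.

* Convert the essential matrix (with covariance propagation) into the
* fundamental matrix, e.g., for visualizing epipolar lines.
essential_to_fundamental_matrix (EMatrix, CovEMat, CamMat1, CamMat2, \
                                 FMatrix, CovFMat)
* Without covariance information, an empty tuple can be passed; the
* returned covariance matrix is then empty as well.
essential_to_fundamental_matrix (EMatrix, [], CamMat1, CamMat2, \
                                 FMatrix2, CovFMat2)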
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
vector_to_essential_matrix
Alternatives
rel_pose_to_fundamental_matrix
Module
3D Metrology

gen_binocular_proj_rectification ( : Map1, Map2 : FMatrix,
CovFMat, Width1, Height1, Width2, Height2, SubSampling,
Mapping : CovFMatRect, H1, H2 )

Compute the projective rectification of weakly calibrated binocular stereo images.


A binocular stereo setup is called weakly calibrated if the fundamental matrix, which describes the projective
relation between the two images, is known. Rectification is the process of finding a suitable set of transformations,
that transform both images such that all corresponding epipolar lines become collinear and parallel to the horizontal
axes. The rectified images can be thought of as acquired by a stereo configuration where the left and right image
plane are identical and the difference between both image centers is a horizontal translation. Note that rectification
can only be performed if both of the epipoles are located outside the images.
Typically, the fundamental matrix is calculated beforehand with match_fundamental_matrix_ransac
and FMatrix is the basis for the computation of the two homographies H1 and H2, which describe the rectifi-
cations for the left image and the right image respectively. Since a projective rectification is an underdetermined
problem, additional constraints are defined: the algorithm chooses the set of homographies that minimizes the
projective distortion induced by the homographies in both images. For the computation of this cost function the


dimensions of the images must be provided in Width1, Height1, Width2, Height2. After rectification the
fundamental matrix is always of the canonical form

$\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix} .$

In the case of a known covariance matrix CovFMat of the fundamental matrix FMatrix, the covariance matrix
CovFMatRect of the above rectified fundamental matrix is calculated. This can help for an improved stereo
matching process because the covariance matrix defines in terms of probabilities the image domain where to find
a corresponding match.
Similar to the operator gen_binocular_rectification_map the output images Map1 and Map2 describe
the transformation, also called mapping, of the original images to the rectified ones. The parameter Mapping
specifies whether bilinear interpolation (’bilinear_map’) should be applied between the pixels in the input image
or whether the gray value of the nearest neighboring pixel should be taken (’nn_map’). The size and resolution
of the maps and of the transformed images can be adjusted by the parameter SubSampling, which applies a
sub-sampling factor to the original images. For example, a factor of two will halve the image sizes. If just the two
homographies are required Mapping can be set to ’no_map’ and no maps will be returned. For speed reasons,
this option should be used if for a specific stereo configuration the images must be rectified only once. If the stereo
setup is fixed, the maps should be generated only once and both images should be rectified with map_image; this
will result in the smallest computational cost for on-line rectification.
When using the maps, the transformed images are of the same size as their maps. Each pixel in the map contains
the description of how the new pixel at this position is generated. The images Map1 and Map2 are single channel
images if Mapping is set to ’nn_map’ and five channel images if it is set to ’bilinear_map’. In the first channel,
which is of type int4, the pixels contain the linear coordinates of their reference pixels in the original image. With
Mapping equal to ’nn_map’ this reference pixel is the nearest neighbor to the back-transformed pixel coordinates
of the map. In the case of bilinear interpolation the reference pixel is the next upper left pixel relative to the back-
transformed coordinates. The following scheme shows the ordering of the pixels in the original image next to the
back-transformed pixel coordinates, where the reference pixel takes the number 2.

2 3
4 5

The channels 2 to 5, which are of type uint2, contain the weights of the relevant pixels for the bilinear interpolation.
Based on the rectified images, the disparity can be computed using binocular_disparity. In contrast to stereo
with fully calibrated cameras (using the operator gen_binocular_rectification_map and its succes-
sors), metric depth information cannot be derived for weakly calibrated cameras. The disparity map gives just a
qualitative depth ordering of the scene.
Parameters
. Map1 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; object : int4 / uint2
Image coding the rectification of the 1. image.
. Map2 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; object : int4 / uint2
Image coding the rectification of the 2. image.
. FMatrix (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real / integer
Fundamental matrix.
. CovFMat (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
9 × 9 covariance matrix of the fundamental matrix.
Default: []
. Width1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Width of the 1. image.
Default: 512
Suggested values: Width1 ∈ {128, 256, 512, 1024}
Restriction: Width1 > 0
. Height1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Height of the 1. image.
Default: 512
Suggested values: Height1 ∈ {128, 256, 512, 1024}
Restriction: Height1 > 0


. Width2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Width of the 2. image.
Default: 512
Suggested values: Width2 ∈ {128, 256, 512, 1024}
Restriction: Width2 > 0
. Height2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Height of the 2. image.
Default: 512
Suggested values: Height2 ∈ {128, 256, 512, 1024}
Restriction: Height2 > 0
. SubSampling (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer / real
Subsampling factor.
Default: 1
List of values: SubSampling ∈ {1, 2, 3, 1.5}
. Mapping (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of mapping.
Default: ’no_map’
List of values: Mapping ∈ {’no_map’, ’nn_map’, ’bilinear_map’}
. CovFMatRect (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real
9 × 9 covariance matrix of the rectified fundamental matrix.
. H1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real
Projective transformation of the 1. image.
. H2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real
Projective transformation of the 2. image.
Example

* Rectify an image pair using a map.
get_image_size (Image1, Width1, Height1)
get_image_size (Image2, Width2, Height2)
points_harris (Image1, 3, 1, 0.2, 10000, Row1, Col1)
points_harris (Image2, 3, 1, 0.2, 10000, Row2, Col2)
match_fundamental_matrix_ransac (Image1, Image2, Row1, Col1, Row2, Col2, \
                                 'ncc', 21, 0, 200, 20, 50, 0, 0.9, \
                                 'gold_standard', 0.3, 1, FMatrix, \
                                 CovFMat, Error, Points1, Points2)
gen_binocular_proj_rectification (Map1, Map2, FMatrix, [], Width1, \
                                  Height1, Width2, Height2, 1, \
                                  'bilinear_map', CovFMatRect, H1, H2)
map_image (Image1, Map1, Image1Rect)
map_image (Image2, Map2, Image2Rect)

* Rectify an image pair without using a map.
get_image_size (Image1, Width1, Height1)
get_image_size (Image2, Width2, Height2)
points_harris (Image1, 3, 1, 0.2, 10000, Row1, Col1)
points_harris (Image2, 3, 1, 0.2, 10000, Row2, Col2)
match_fundamental_matrix_ransac (Image1, Image2, Row1, Col1, Row2, Col2, \
                                 'ncc', 21, 0, 200, 20, 50, 0, 0.9, \
                                 'gold_standard', 0.3, 1, FMatrix, \
                                 CovFMat, Error, Points1, Points2)
gen_binocular_proj_rectification (Map1, Map2, FMatrix, [], Width1, \
                                  Height1, Width2, Height2, 1, \
                                  'no_map', CovFMatRect, H1, H2)
* Determine the maximum extent of the two rectified images.
projective_trans_point_2d (H1, [0,0,Height1,Height1], \
                           [0,Width1,0,Width1], [1,1,1,1], R1, C1, W1)
R1 := int(floor(R1/W1))
C1 := int(floor(C1/W1))
projective_trans_point_2d (H2, [0,0,Height2,Height2], \
                           [0,Width2,0,Width2], [1,1,1,1], R2, C2, W2)
R2 := int(floor(R2/W2))
C2 := int(floor(C2/W2))
WidthRect := max([C1,C2])
HeightRect := max([R1,R2])
projective_trans_image_size (Image1, Image1Rect, H1, 'bilinear', \
                             WidthRect, HeightRect, 'false')
projective_trans_image_size (Image2, Image2Rect, H2, 'bilinear', \
                             WidthRect, HeightRect, 'false')

Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
match_fundamental_matrix_ransac, vector_to_fundamental_matrix
Possible Successors
map_image, projective_trans_image, binocular_disparity
Alternatives
gen_binocular_rectification_map
References
J. Gluckmann and S.K. Nayar: “Rectifying transformations that minimize resampling effects”; IEEE Conference
on Computer Vision and Pattern Recognition (CVPR) 2001, vol I, pages 111-117.
Module
3D Metrology

gen_binocular_rectification_map ( : Map1, Map2 : CamParam1,
CamParam2, RelPose, SubSampling, Method, MapType : CamParamRect1,
CamParamRect2, CamPoseRect1, CamPoseRect2, RelPoseRect )

Generate transformation maps that describe the mapping of the images of a binocular camera pair to a common
rectified image plane.
Given a pair of stereo images, rectification determines a transformation of each image plane in a way that
pairs of conjugate epipolar lines become collinear and parallel to the horizontal image axes. This is required
for an efficient calculation of disparities or distances with operators such as binocular_disparity or
binocular_distance. The rectified images can be thought of as acquired by a new stereo rig, obtained
by rotating and, in case of telecentric area scan and line scan cameras, translating the original cameras. The projec-
tion centers (i.e., in the telecentric case, the direction of the optical axes) are maintained. For perspective cameras,
the image planes are additionally transformed into a common plane, which means that the focal lengths are set
equal, and the optical axes are parallel. For a stereo setup of mixed type (i.e., one perspective and one telecentric
camera), the image planes are also transformed into a common plane, as described below.
To achieve the transformation map for rectified images gen_binocular_rectification_map requires
the internal camera parameters CamParam1 of camera 1 and CamParam2 of camera 2, as well as the relative
pose RelPose, ccs1 Pccs2 , defining a point transformation from camera coordinate system 2 (ccs2) into camera
coordinate system 1 (ccs1), see Transformations / Poses and “Solution Guide III-C - 3D Vision”.
These parameters can be obtained, e.g., from the operator calibrate_cameras.
The internal camera parameters, modified by the rectification, are returned in CamParamRect1 for camera 1 and
CamParamRect2 for camera 2, respectively. The rotation and, in case of telecentric cameras, translation of the
rectified camera in relation to the original camera is specified by CamPoseRect1 and CamPoseRect2, respec-
tively. These poses are in the form ccsX PccsRX with ccsX: camera coordinate system of camera X and ccsRX:
camera coordinate system of camera X for the rectified image. Finally, RelPoseRect returns ccsR1 PccsR2 , the


relative pose of the rectified camera coordinate system 2 (ccsR2) relative to the rectified camera coordinate system
1 (ccsR1).
Rectification Method
For perspective area scan cameras, RelPoseRect only has a translation in x. Generally, the transformations are
defined in a way that the rectified camera 1 is left of the rectified camera 2. This means that the optical center of
camera 2 has a positive x coordinate of the rectified coordinate system of camera 1.
The projection onto a common plane has many degrees of freedom, which are implicitly restricted by selecting a
certain method in Method:

• ’viewing_direction’ uses the baseline as the x-axis of the common image plane. The mean of the viewing
directions (z-axes) of the two cameras is used to span the x-z plane of the rectified system. The resulting
rectified z-axis is the orientation of the common image plane and as such located in this plane and orthogonal
to the baseline. In many cases, the resulting rectified z-axis will not differ much from the mean of the two old
z-axes. The new focal length is determined in such a way that the old principal points have the same distance
to the new common image plane. The different z-axes directions are illustrated in the schematic below.

[Figure: Illustration of the different z-axes directions using ’viewing_direction’. (1): View facing the base line
(in orange). (2): View along the base line (pointing into the page, in orange).]

• ’geometric’ specifies the orientation of the common image plane by the cross product of the baseline and the
line of intersection of the original image planes. The new focal length is determined in such a way that the
old principal points have the same distance to the new common image plane.

For telecentric area scan and line scan cameras, the parameter Method is ignored. The relative pose of both
cameras is not uniquely defined in such a system since the cameras return identical images no matter how they
are translated along their optical axis. Yet, in order to define an absolute distance measurement to the cameras, a
standard position of both cameras is considered. This position is defined as follows: Both cameras are translated
along their optical axes until their distance is one meter and until the line between the cameras (baseline) forms the
same angle with both optical axes (i.e., the baseline and the optical axes form an isosceles triangle). The optical
axes remain unchanged. The relative pose of the rectified cameras RelPoseRect may be different from the
relative pose of the original cameras RelPose.
For a stereo setup of mixed type (i.e., one perspective and one telecentric camera), the parameter Method is
ignored. The rectified image plane is determined uniquely from the geometry of the perspective camera and the
relative pose of the two cameras. The normal of the rectified image plane is the vector that points from the
projection center of the perspective camera to the point on the optical axis of the telecentric camera that has the
shortest distance from the projection center of the perspective camera. This is also the z-axis of the rectified
perspective camera. The geometric base of the mixed camera system is a line that passes through the projection
center of the perspective camera and has the same direction as the z-axis of the telecentric camera, i.e., the base
is parallel to the viewing direction of the telecentric camera. The x-axis of the rectified perspective camera is
given by the base and the y-axis is constructed to form a right-handed coordinate system. To rectify the telecentric
camera, its optical axis must be shifted to the base and the image plane must be tilted by 90◦ or −90◦ . To
achieve this, a special type of object-side telecentric camera that is able to handle this special rectification geometry
(indicated by a negative image plane distance ImagePlaneDist) must be used for the rectified telecentric
camera. The representation of this special camera type should be regarded as a black box because it is used only
for rectification purposes in HALCON (for this reason, it is not documented in camera_calibration). The
rectified telecentric camera has the same orientation as the original telecentric camera, while its origin is translated
to a point on the base.


Rectification Maps
The mapping functions for the images of camera 1 and camera 2 are returned in the images Map1 and Map2.
MapType is used to specify the type of the output maps. If ’nearest_neighbor’ is chosen, both maps consist of one
image containing one channel, in which for each pixel of the resulting image the linearized coordinate of the pixel
of the input image is stored that is the nearest neighbor to the transformed coordinates. If ’bilinear’ interpolation
is chosen, both maps consists of one image containing five channels. In the first channel for each pixel in the
resulting image the linearized coordinates of the pixel in the input image is stored that is in the upper left position
relative to the transformed coordinates. The four other channels contain the weights of the four neighboring pixels
of the transformed coordinates which are used for the bilinear interpolation, in the following order:

2 3
4 5

The second channel, for example, contains the weights of the pixels that lie to the upper left relative to the trans-
formed coordinates. If ’coord_map_sub_pix’ is chosen, both maps consist of one vector field image, in which for
each pixel of the resulting image the subpixel precise coordinates in the input image are stored.
The size and resolution of the maps and of the transformed images can be adjusted by the SubSampling param-
eter which applies a sub-sampling factor to the original images.
If you want to re-use the created map in another program, you can save it as a multi-channel image with the
operator write_image, using the format ’tiff’.
Attention
Stereo setups that contain cameras with and without hypercentric lenses at the same time are not supported.
Parameters
. Map1 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; object : int4 / uint2 / vector_field
Image containing the mapping data of camera 1.
. Map2 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; object : int4 / uint2 / vector_field
Image containing the mapping data of camera 2.
. CamParam1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal parameters of camera 1.
. CamParam2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal parameters of camera 2.
. RelPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Point transformation from camera 2 to camera 1.
Number of elements: 7
. SubSampling (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Subsampling factor.
Default: 1.0
Suggested values: SubSampling ∈ {0.5, 0.66, 1.0, 1.5, 2.0, 3.0, 4.0}
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of rectification.
Default: ’viewing_direction’
List of values: Method ∈ {’viewing_direction’, ’geometric’}
. MapType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of mapping.
Default: ’bilinear’
List of values: MapType ∈ {’nearest_neighbor’, ’bilinear’, ’coord_map_sub_pix’}
. CamParamRect1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Rectified internal parameters of camera 1.
. CamParamRect2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Rectified internal parameters of camera 2.
. CamPoseRect1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Point transformation from the rectified camera 1 to the original camera 1.
Number of elements: 7
. CamPoseRect2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Point transformation from the rectified camera 2 to the original camera 2.
Number of elements: 7


. RelPoseRect (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Point transformation from the rectified camera 2 to the rectified camera 1.
Number of elements: 7
Example

* Set internal and external stereo parameters.
* Note that, typically, these values are the result of a prior
* calibration.
gen_cam_par_area_scan_division (0.01, -665, 5.2e-006, 5.2e-006, \
                                622, 517, 1280, 1024, CamParam1)
gen_cam_par_area_scan_division (0.01, -731, 5.2e-006, 5.2e-006, \
                                654, 519, 1280, 1024, CamParam2)
create_pose (0.1535,-0.0037,0.0447,0.17,319.84,359.89, \
             'Rp+T', 'gba', 'point', RelPose)
* Compute the mapping for rectified images.
gen_binocular_rectification_map (Map1, Map2, CamParam1, CamParam2, \
                                 RelPose, 1, 'viewing_direction', 'bilinear', \
                                 CamParamRect1, CamParamRect2, \
                                 CamPoseRect1, CamPoseRect2, \
                                 RelPoseRect)
* Compute the disparities in online images.
while (1)
    grab_image_async (Image1, AcqHandle1, -1)
    map_image (Image1, Map1, ImageMapped1)
    grab_image_async (Image2, AcqHandle2, -1)
    map_image (Image2, Map2, ImageMapped2)
    binocular_disparity (ImageMapped1, ImageMapped2, Disparity, Score, \
                         'sad', 11, 11, 20, -40, 20, 2, 25, \
                         'left_right_check', 'interpolation')
endwhile

Result
gen_binocular_rectification_map returns 2 (H_MSG_TRUE) if all parameter values are correct. If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
binocular_calibration
Possible Successors
map_image
Alternatives
gen_image_to_world_plane_map
See also
map_image, binocular_disparity, binocular_distance, binocular_disparity_mg,
binocular_distance_mg, binocular_disparity_ms, binocular_distance_ms,
gen_image_to_world_plane_map, contour_to_world_plane_xld,
image_points_to_world_plane
Module
3D Metrology


intersect_lines_of_sight ( : : CamParam1, CamParam2, RelPose,
Row1, Col1, Row2, Col2 : X, Y, Z, Dist )

Get a 3D point from the intersection of two lines of sight within a binocular camera system.
Given two lines of sight from different cameras, specified by their image points (Row1,Col1) of camera 1 and
(Row2,Col2) of camera 2, intersect_lines_of_sight computes the 3D point of intersection of these
lines. The binocular camera system is specified by its internal camera parameters CamParam1 of the projective
camera 1 and CamParam2 of the projective camera 2, and the external parameters RelPose. The latter is of the
form $^{ccs1}P_{ccs2}$ and characterizes the relative pose of both cameras to each other, thus defining a point transforma-
tion from camera coordinate system 2 (ccs2) into camera coordinate system 1 (ccs1), see Transformations / Poses
and “Solution Guide III-C - 3D Vision”. These camera parameters can be obtained, e.g., from the
operator calibrate_cameras, if the coordinates of the image points (Row1,Col1) and (Row2,Col2) re-
fer to the respective original image coordinate system. In case of rectified image coordinates ( e.g., obtained
from rectified images), the rectified camera parameters must be passed, as they are returned by the operator
gen_binocular_rectification_map. The ’point of intersection’ is defined by the point with the shortest
distance to both lines of sight. This point is returned in Cartesian coordinates (X,Y,Z) of camera system 1 and its
distance to the lines of sight is passed in Dist.
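For illustration, the following sketch intersects the lines of sight of a single corresponding point pair. The camera parameters and the relative pose are placeholders taken over from the example of gen_binocular_rectification_map; in a real application they result from binocular_calibration, and (Row1,Col1), (Row2,Col2) stem from a preceding matching step:

* Placeholder stereo parameters; in practice obtained from a calibration.
gen_cam_par_area_scan_division (0.01, -665, 5.2e-006, 5.2e-006, \
                                622, 517, 1280, 1024, CamParam1)
gen_cam_par_area_scan_division (0.01, -731, 5.2e-006, 5.2e-006, \
                                654, 519, 1280, 1024, CamParam2)
create_pose (0.1535,-0.0037,0.0447,0.17,319.84,359.89, \
             'Rp+T', 'gba', 'point', RelPose)
* (Row1,Col1) and (Row2,Col2) are a corresponding point pair in the
* original (unrectified) images, e.g., from a point matching step.
intersect_lines_of_sight (CamParam1, CamParam2, RelPose, \
                          Row1, Col1, Row2, Col2, X, Y, Z, Dist)
* X, Y, Z are given in the coordinate system of camera 1; a large Dist
* indicates an unreliable correspondence.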
Attention
Stereo setups that contain cameras with and without hypercentric lenses at the same time are not supported.
Parameters
. CamParam1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal parameters of the projective camera 1.
. CamParam2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal parameters of the projective camera 2.
. RelPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Point transformation from camera 2 to camera 1.
Number of elements: 7
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Row coordinate of a point in image 1.
. Col1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Column coordinate of a point in image 1.
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Row coordinate of the corresponding point in image 2.
. Col2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Column coordinate of the corresponding point in image 2.
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
X coordinate of the 3D point.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Y coordinate of the 3D point.
. Z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Z coordinate of the 3D point.
. Dist (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Distance of the 3D point to the lines of sight.
Result
intersect_lines_of_sight returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an
exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
binocular_calibration


See also
disparity_to_point_3d
Module
3D Metrology

match_essential_matrix_ransac ( Image1, Image2 : : Rows1, Cols1,
Rows2, Cols2, CamMat1, CamMat2, GrayMatchMethod, MaskSize,
RowMove, ColMove, RowTolerance, ColTolerance, Rotation,
MatchThreshold, EstimationMethod, DistanceThreshold,
RandSeed : EMatrix, CovEMat, Error, Points1, Points2 )

Compute the essential matrix for a pair of stereo images by automatically finding correspondences between image
points.
Given a set of coordinates of characteristic points (Rows1, Cols1) and (Rows2, Cols2) in the stereo images
Image1 and Image2 along with known internal camera parameters, specified by the camera matrices CamMat1
and CamMat2, match_essential_matrix_ransac automatically determines the geometry of the stereo
setup and finds the correspondences between the characteristic points. The geometry of the stereo setup is repre-
sented by the essential matrix EMatrix and all corresponding points have to fulfill the epipolar constraint.
The operator match_essential_matrix_ransac is designed to deal with a linear camera model. The
internal camera parameters are passed by the arguments CamMat1 and CamMat2, which are 3×3 upper triangular
matrices describing an affine transformation. The relation between a vector (X,Y,1), representing the direction from
the camera to the viewed 3D space point and its (projective) 2D image coordinates (col,row,1) is:
$$
\begin{pmatrix} col \\ row \\ 1 \end{pmatrix} = CamMat \cdot \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix}
\quad \text{where} \quad
CamMat = \begin{pmatrix} f/s_x & s & c_x \\ 0 & f/s_y & c_y \\ 0 & 0 & 1 \end{pmatrix} .
$$

Note the column/row ordering in the point coordinates which has to be compliant with the x/y notation of the
camera coordinate system. The focal length is denoted by $f$, $s_x$ and $s_y$ are scaling factors, $s$ describes a skew factor,
and $(c_x, c_y)$ indicates the principal point. Mainly, these are the elements known from the camera parameters as
used for example in calibrate_cameras. Alternatively, the elements of the camera matrix can be described
in a different way, see e.g. stationary_camera_self_calibration. Multiplied by the inverse of the
camera matrices the direction vectors in 3D space are obtained from the (projective) image coordinates. For known
camera matrices the epipolar constraint is given by:
$$
\begin{pmatrix} X_2 \\ Y_2 \\ 1 \end{pmatrix}^{T} \cdot EMatrix \cdot \begin{pmatrix} X_1 \\ Y_1 \\ 1 \end{pmatrix} = 0 .
$$

The matching process is based on characteristic points, which can be extracted with point operators like
points_foerstner or points_harris. The matching itself is carried out in two steps: first, gray value
correlations of mask windows around the input points in the first and the second image are determined and an ini-
tial matching between them is generated using the similarity of the windows in both images. Then, the RANSAC
algorithm is applied to find the essential matrix that maximizes the number of correspondences under the epipolar
constraint.
The size of the mask windows is MaskSize × MaskSize. Three metrics for the correlation can be se-
lected. If GrayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used, ’sad’
means the sum of absolute differences, and ’ncc’ is the normalized cross correlation. For details please refer to
binocular_disparity. The metric is minimized (’ssd’, ’sad’) or maximized (’ncc’) over all possible point
pairs. A matching found in this way is only accepted if the value of the metric is below the value of MatchThreshold
(’ssd’, ’sad’) or above that value (’ncc’).
To increase the speed of the algorithm, the search area for the matching operations can be limited. Only points
within a window of 2 · RowTolerance × 2 · ColTolerance points are considered. The offset of the center of
the search window in the second image with respect to the position of the current point in the first image is given
by RowMove and ColMove.


If the second camera is rotated around the optical axis with respect to the first camera the parameter Rotation
may contain an estimate for the rotation angle or an angle interval in radians. A good guess will increase the quality
of the gray value matching. If the actual rotation differs too much from the specified estimate the matching will
typically fail. In this case, an angle interval should be specified, and Rotation is a tuple with two elements. The
larger the given interval the slower the operator is since the RANSAC algorithm is run over all angle increments
within the interval.
After the initial matching is completed a randomized search algorithm (RANSAC) is used to determine the essen-
tial matrix EMatrix. It tries to find the essential matrix that is consistent with a maximum number of correspon-
dences. For a point to be accepted, the distance to its corresponding epipolar line must not exceed the threshold
DistanceThreshold.
The parameter EstimationMethod decides whether the relative orientation between the cameras is of a special
type and which algorithm is to be applied for its computation. If EstimationMethod is either ’normalized_dlt’
or ’gold_standard’ the relative orientation is arbitrary. Choosing ’trans_normalized_dlt’ or ’trans_gold_standard’
means that the relative motion between the cameras is a pure translation. The typical application for this special
motion case is the scenario of a single fixed camera looking onto a moving conveyor belt. In order to get a unique
solution in the correspondence problem the minimum required number of corresponding points is six in the general
case and three in the special, translational case.
The essential matrix is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen.
With ’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result, and returns the
covariance of the essential matrix CovEMat as well. Here, ’normalized_dlt’ and ’gold_standard’ stand for direct-
linear-transformation and gold-standard-algorithm respectively. Note that, in general, the found correspondences
differ depending on the deployed estimation method.
The value Error indicates the overall quality of the estimation procedure and is the mean Euclidean distance in
pixels between the points and their corresponding epipolar lines.
Point pairs consistent with the mentioned constraints are considered to be in correspondence. Points1 contains
the indices of the matched input points from the first image and Points2 contains the indices of the corresponding
points in the second image.
For the operator match_essential_matrix_ransac a special configuration of scene points and cameras
exists: if all 3D points lie in a single plane and additionally are all closer to one of the two cameras then the solution
in the essential matrix is not unique but twofold. As a consequence both solutions are computed and returned by
the operator. This means that the output parameters EMatrix, CovEMat and Error are of double length and
the values of the second solution are simply concatenated behind the values of the first one.
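The following fragment is one way to separate such a twofold result. It assumes that each solution of the essential matrix occupies nine tuple elements (the 3×3 matrix stored row by row); Error contains one value per solution:

* After match_essential_matrix_ransac: check whether the degenerate,
* twofold case occurred. Error holds one value per solution.
* Assumption: each essential matrix occupies 9 tuple elements.
if (|Error| == 2)
    EMatrix1 := EMatrix[0:8]
    EMatrix2 := EMatrix[9:17]
else
    EMatrix1 := EMatrix
endif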
The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm, and hence
to obtain reproducible results. If RandSeed is set to a positive number the operator yields the same result on
every call with the same parameters because the internally used random number generator is initialized with the
RandSeed. If RandSeed = 0 the random number generator is initialized with the current time. In this case the
results may not be reproducible. The value set for the HALCON system variable ’seed_rand’ (see set_system)
does not affect the results of match_essential_matrix_ransac.
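A typical calling sequence might look as follows. The camera matrices CamMat1 and CamMat2 are assumed to be available from a prior calibration, and all numeric parameter values are merely illustrative:

* Extract characteristic points in both images.
points_foerstner (Image1, 1, 2, 3, 200, 0.1, 'gauss', 'true', \
                  Rows1, Cols1, _, _, _, _, _, _, _, _)
points_foerstner (Image2, 1, 2, 3, 200, 0.1, 'gauss', 'true', \
                  Rows2, Cols2, _, _, _, _, _, _, _, _)
* CamMat1 and CamMat2: camera matrices of the two cameras, assumed
* to be known from a prior calibration (placeholders here).
match_essential_matrix_ransac (Image1, Image2, Rows1, Cols1, \
                               Rows2, Cols2, CamMat1, CamMat2, \
                               'ncc', 10, 0, 0, 200, 200, 0, 0.7, \
                               'gold_standard', 1, 42, \
                               EMatrix, CovEMat, Error, Points1, Points2)
* Select the coordinates of the matched points via the returned indices.
MRows1 := subset(Rows1, Points1)
MCols1 := subset(Cols1, Points1)
MRows2 := subset(Rows2, Points2)
MCols2 := subset(Cols2, Points2)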
Parameters
. Image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2
Input image 1.
. Image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2
Input image 2.
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinates of characteristic points in image 1.
Restriction: length(Rows1) >= 6 || length(Rows1) >= 3
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinates of characteristic points in image 1.
Restriction: length(Cols1) == length(Rows1)
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinates of characteristic points in image 2.
Restriction: length(Rows2) >= 6 || length(Rows2) >= 3
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinates of characteristic points in image 2.
Restriction: length(Cols2) == length(Rows2)


. CamMat1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real / integer
Camera matrix of the 1st camera.
. CamMat2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real / integer
Camera matrix of the 2nd camera.
. GrayMatchMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Gray value comparison metric.
Default: ’ssd’
List of values: GrayMatchMethod ∈ {’ssd’, ’sad’, ’ncc’}
. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Size of gray value masks.
Default: 10
Suggested values: MaskSize ∈ {3, 7, 15}
Value range: 1 ≤ MaskSize
. RowMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Average row coordinate shift of corresponding points.
Default: 0
Value range: 0 ≤ RowMove ≤ 200
. ColMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Average column coordinate shift of corresponding points.
Default: 0
Value range: 0 ≤ ColMove ≤ 200
. RowTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Half height of matching search window.
Default: 200
Value range: 1 ≤ RowTolerance
. ColTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Half width of matching search window.
Default: 200
Value range: 1 ≤ ColTolerance
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; real / integer
Estimate of the relative orientation of the right image with respect to the left image.
Default: 0.0
Suggested values: Rotation ∈ {0.0, 0.1, -0.1, 0.7854, 1.571, 3.142}
. MatchThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer / real
Threshold for gray value matching.
Default: 10
Suggested values: MatchThreshold ∈ {10, 20, 50, 100, 0.9, 0.7}
. EstimationMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Algorithm for the computation of the essential matrix and for special camera orientations.
Default: ’normalized_dlt’
List of values: EstimationMethod ∈ {’normalized_dlt’, ’gold_standard’, ’trans_normalized_dlt’,
’trans_gold_standard’}
. DistanceThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Maximal deviation of a point from its epipolar line.
Default: 1
Value range: 0.5 ≤ DistanceThreshold ≤ 5
Restriction: DistanceThreshold > 0
. RandSeed (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Seed for the random number generator.
Default: 0
. EMatrix (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real
Computed essential matrix.
. CovEMat (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
9 × 9 covariance matrix of the essential matrix.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Root-Mean-Square of the epipolar distance error.


. Points1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Indices of matched input points in image 1.
. Points2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Indices of matched input points in image 2.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
points_foerstner, points_harris
Possible Successors
vector_to_essential_matrix
See also
match_fundamental_matrix_ransac, match_rel_pose_ransac,
stationary_camera_self_calibration
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2003.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
3D Metrology

match_fundamental_matrix_distortion_ransac ( Image1,
Image2 : : Rows1, Cols1, Rows2, Cols2, GrayMatchMethod,
MaskSize, RowMove, ColMove, RowTolerance, ColTolerance, Rotation,
MatchThreshold, EstimationMethod, DistanceThreshold,
RandSeed : FMatrix, Kappa, Error, Points1, Points2 )

Compute the fundamental matrix and the radial distortion coefficient for a pair of stereo images by automatically
finding correspondences between image points.
Given a set of coordinates of characteristic points (Rows1, Cols1) and (Rows2, Cols2)
in the stereo images Image1 and Image2, which must be of identical size,
match_fundamental_matrix_distortion_ransac automatically finds the correspondences be-
tween the characteristic points and determines the geometry of the stereo setup. For unknown cameras the
geometry of the stereo setup is represented by the fundamental matrix FMatrix and the radial distortion
coefficient Kappa (κ). All corresponding points must fulfill the epipolar constraint:
$$
\begin{pmatrix} c_2 \\ r_2 \\ 1 \end{pmatrix}^{T} \cdot FMatrix \cdot \begin{pmatrix} c_1 \\ r_1 \\ 1 \end{pmatrix} = 0 .
$$

Here, $(r_1, c_1)$ and $(r_2, c_2)$ denote image points that are obtained by undistorting the input image points with the
division model (see Calibration):
$$
r = \frac{\tilde{r}}{1 + \kappa(\tilde{r}^2 + \tilde{c}^2)} \qquad c = \frac{\tilde{c}}{1 + \kappa(\tilde{r}^2 + \tilde{c}^2)}
$$
Here,
$$
(\tilde{r}_1, \tilde{c}_1) = (\mathrm{Rows1} - 0.5(h-1), \; \mathrm{Cols1} - 0.5(w-1))
\quad \text{and} \quad
(\tilde{r}_2, \tilde{c}_2) = (\mathrm{Rows2} - 0.5(h-1), \; \mathrm{Cols2} - 0.5(w-1))
$$


denote the distorted image points, specified relative to the image center, and w and h denote the width and height of
the input images. Thus, match_fundamental_matrix_distortion_ransac assumes that the principal
point of the camera, i.e., the center of the radial distortions, lies at the center of the image.
The returned Kappa can be used to construct camera parameters that can be used to rectify images or
points (see change_radial_distortion_cam_par, change_radial_distortion_image, and
change_radial_distortion_points):

CamPar = ['area_scan_telecentric_division', 0.0, Kappa, 1.0, 1.0, 0.5(w − 1), 0.5(h − 1), w, h]

Note the column/row ordering in the point coordinates above: since the fundamental matrix encodes the projective
relation between two stereo images embedded in 3D space, the x/y notation must be compliant with the camera
coordinate system. Therefore, (x,y) coordinates correspond to (column,row) pairs.
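The following sketch illustrates how the returned Kappa might be used to undistort the matched point coordinates directly instead of the complete images; it uses the same camera parameter tuple as the example at the end of this section, and the parameter order of change_radial_distortion_points is assumed to be (Row, Col, CamParamIn, CamParamOut, RowChanged, ColChanged):

get_image_size (Image1, Width, Height)
* Division-model camera with the principal point at the image center,
* mirroring the camera tuple used in the example below.
CamParDist := ['area_scan_division', 0.0, Kappa, 1.0, 1.0, \
               0.5 * (Width - 1), 0.5 * (Height - 1), Width, Height]
* Corresponding distortion-free camera parameters.
change_radial_distortion_cam_par ('fixed', CamParDist, 0, CamPar)
* Undistort the point coordinates (assumed parameter order, see above).
change_radial_distortion_points (Rows1, Cols1, CamParDist, CamPar, \
                                 URows1, UCols1)
change_radial_distortion_points (Rows2, Cols2, CamParDist, CamPar, \
                                 URows2, UCols2)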
The matching process is based on characteristic points, which can be extracted with point operators like
points_foerstner or points_harris. The matching itself is carried out in two steps: first, gray value
correlations of mask windows around the input points in the first and the second image are determined and an ini-
tial matching between them is generated using the similarity of the windows in both images. Then, the RANSAC
algorithm is applied to find the fundamental matrix and radial distortion coefficient that maximizes the number of
correspondences under the epipolar constraint.
The size of the mask windows used for the matching is MaskSize×MaskSize. Three metrics for the correlation
can be selected. If GrayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used,
’sad’ means the sum of absolute differences, and ’ncc’ is the normalized cross correlation. For details please refer
to binocular_disparity. The metric is minimized (’ssd’, ’sad’) or maximized (’ncc’) over all possible point
pairs. A matching thus found is only accepted if the value of the metric is below the value of MatchThreshold
(’ssd’, ’sad’) or above that value (’ncc’).
To increase the speed of the algorithm the search area for the match candidates can be limited to a rectangle by
specifying its size and offset. Only points within a window of 2 · RowTolerance × 2 · ColTolerance points
are considered. The offset of the center of the search window in the second image with respect to the position of
the current point in the first image is given by RowMove and ColMove.
If the second camera is rotated around the optical axis with respect to the first camera, the parameter Rotation
may contain an estimate for the rotation angle or an angle interval in radians. A good guess will increase the quality
of the gray value matching. If the actual rotation differs too much from the specified estimate, the matching will
typically fail. In this case, an angle interval should be specified and Rotation is a tuple with two elements. The
larger the given interval is, the slower the operator is, since the RANSAC algorithm is run over all (automatically
determined) angle increments within the interval.
After the initial matching has been completed, a randomized search algorithm (RANSAC) is used to determine the
fundamental matrix FMatrix and the radial distortion coefficient Kappa. It tries to find the parameters that are
consistent with a maximum number of correspondences. For a point to be accepted, the distance in pixels to its
corresponding epipolar line must not exceed the threshold DistanceThreshold.
The parameter EstimationMethod decides whether the relative orientation between the cameras is of a spe-
cial type and which algorithm is to be applied for its computation. If EstimationMethod is either ’lin-
ear’ or ’gold_standard’, the relative orientation is arbitrary. If the left and right cameras are identical and the
relative orientation between them is a pure translation, EstimationMethod can be set to ’trans_linear’ or
’trans_gold_standard’. The typical application for this special motion case is the scenario of a single fixed cam-
era looking onto a moving conveyor belt. In order to get a unique solution for the correspondence problem, the
minimum required number of corresponding points is nine in the general case and four in the special translational
case.
The fundamental matrix is computed by a linear algorithm if EstimationMethod is set to ’linear’ or
’trans_linear’. This algorithm is very fast. For the pure translation case (EstimationMethod = ’trans_linear’),
the linear method returns accurate results for small to moderate noise of the point coordinates and for
most distortions (except for very small distortions). For a general relative orientation of the two cameras
(EstimationMethod = ’linear’), the linear method only returns accurate results for very small noise of
the point coordinates and for sufficiently large distortions. For EstimationMethod = ’gold_standard’ or
’trans_gold_standard’, a mathematically optimal but slower optimization is used, which minimizes the geometric
reprojection error of reconstructed projective 3D points. For a general relative orientation of the two cameras, in
general EstimationMethod = ’gold_standard’ should be selected.


The value Error indicates the overall quality of the estimation procedure and is the mean symmetric Euclidean
distance in pixels between the points and their corresponding epipolar lines.
Point pairs consistent with the above constraints are considered to be corresponding points. Points1 contains the
indices of the matched input points from the first image and Points2 contains the indices of the corresponding
points in the second image.
The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm, and hence to
obtain reproducible results. If RandSeed is set to a positive number, the operator returns the same result on every
call with the same parameters because the internally used random number generator is initialized with RandSeed.
If RandSeed = 0, the random number generator is initialized with the current time. In this case the results may
not be reproducible. The value set for the HALCON system variable ’seed_rand’ (see set_system) does not
affect the results of match_fundamental_matrix_distortion_ransac.
Parameters
. Image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2
Input image 1.
. Image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2
Input image 2.
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real / integer
Input points in image 1 (row coordinate).
Restriction: length(Rows1) >= 9 || length(Rows1) >= 4
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real / integer
Input points in image 1 (column coordinate).
Restriction: length(Cols1) == length(Rows1)
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real / integer
Input points in image 2 (row coordinate).
Restriction: length(Rows2) >= 9 || length(Rows2) >= 4
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real / integer
Input points in image 2 (column coordinate).
Restriction: length(Cols2) == length(Rows2)
. GrayMatchMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Gray value match metric.
Default: ’ncc’
List of values: GrayMatchMethod ∈ {’ncc’, ’ssd’, ’sad’}
. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Size of gray value masks.
Default: 10
Suggested values: MaskSize ∈ {3, 7, 15}
Value range: 1 ≤ MaskSize
. RowMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Average row coordinate offset of corresponding points.
Default: 0
. ColMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Average column coordinate offset of corresponding points.
Default: 0
. RowTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Half height of matching search window.
Default: 200
Restriction: RowTolerance >= 1
. ColTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Half width of matching search window.
Default: 200
Restriction: ColTolerance >= 1
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; real / integer
Estimate of the relative rotation of the second image with respect to the first image.
Default: 0.0
Suggested values: Rotation ∈ {0.0, 0.1, -0.1, 0.7854, 1.571, 3.142}


. MatchThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer / real
Threshold for gray value matching.
Default: 0.7
Suggested values: MatchThreshold ∈ {0.9, 0.7, 0.5, 10, 20, 50, 100}
. EstimationMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Algorithm for the computation of the fundamental matrix and for special camera orientations.
Default: ’gold_standard’
List of values: EstimationMethod ∈ {’linear’, ’gold_standard’, ’trans_linear’, ’trans_gold_standard’}
. DistanceThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Maximal deviation of a point from its epipolar line.
Default: 1
Restriction: DistanceThreshold > 0
. RandSeed (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Seed for the random number generator.
Default: 0
. FMatrix (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real
Computed fundamental matrix.
. Kappa (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Computed radial distortion coefficient.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Root-Mean-Square epipolar distance error.
. Points1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Indices of matched input points in image 1.
. Points2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Indices of matched input points in image 2.
Example

points_foerstner (Image1, 1, 2, 3, 200, 0.1, 'gauss', 'true', \
                  Rows1, Cols1, _, _, _, _, _, _, _, _)
points_foerstner (Image2, 1, 2, 3, 200, 0.1, 'gauss', 'true', \
Rows2, Cols2, _, _, _, _, _, _, _, _)
match_fundamental_matrix_distortion_ransac (Image1, Image2, \
Rows1, Cols1, Rows2, \
Cols2, 'ncc', 10, 0, 0, \
100, 200, 0, 0.5, \
'trans_gold_standard', \
1, 42, FMatrix, Kappa, \
Error, Points1, Points2)
get_image_size (Image1, Width, Height)
CamParDist := ['area_scan_division',0.0,Kappa,1.0,1.0,\
0.5*(Width-1),0.5*(Height-1),Width,Height]
change_radial_distortion_cam_par ('fixed', CamParDist, 0, CamPar)
change_radial_distortion_image (Image1, Image1, Image1Rect, \
CamParDist, CamPar)
change_radial_distortion_image (Image2, Image2, Image2Rect, \
CamParDist, CamPar)
gen_binocular_proj_rectification (Map1, Map2, FMatrix, [], Width, \
Height, Width, Height, 1, \
'bilinear_map', _, H1, H2)
map_image (Image1Rect, Map1, Image1Mapped)
map_image (Image2Rect, Map2, Image2Mapped)
binocular_disparity_mg (Image1Mapped, Image2Mapped, Disparity, \
Score, 1, 30, 8, 0, 'false', \
'default_parameters', 'fast_accurate')

Execution Information


• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
points_foerstner, points_harris
Possible Successors
vector_to_fundamental_matrix_distortion, change_radial_distortion_cam_par,
change_radial_distortion_image, change_radial_distortion_points,
gen_binocular_proj_rectification
See also
match_fundamental_matrix_ransac, match_essential_matrix_ransac,
match_rel_pose_ransac, proj_match_points_ransac, calibrate_cameras
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2003.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
3D Metrology

match_fundamental_matrix_ransac ( Image1, Image2 : : Rows1,
Cols1, Rows2, Cols2, GrayMatchMethod, MaskSize, RowMove,
ColMove, RowTolerance, ColTolerance, Rotation, MatchThreshold,
EstimationMethod, DistanceThreshold, RandSeed : FMatrix, CovFMat,
Error, Points1, Points2 )

Compute the fundamental matrix for a pair of stereo images by automatically finding correspondences between
image points.
Given a set of coordinates of characteristic points (Rows1, Cols1) and (Rows2, Cols2) in the stereo images
Image1 and Image2, match_fundamental_matrix_ransac automatically finds the correspondences
between the characteristic points and determines the geometry of the stereo setup. For unknown cameras the
geometry of the stereo setup is represented by the fundamental matrix FMatrix and all corresponding points
have to fulfill the epipolar constraint, namely:
$$
\begin{pmatrix} \mathrm{Cols2} \\ \mathrm{Rows2} \\ 1 \end{pmatrix}^{T} \cdot FMatrix \cdot \begin{pmatrix} \mathrm{Cols1} \\ \mathrm{Rows1} \\ 1 \end{pmatrix} = 0 .
$$

Note the column/row ordering in the point coordinates: because the fundamental matrix encodes the projective
relation between two stereo images embedded in 3D space, the x/y notation has to be compliant with the camera
coordinate system. So, (x,y) coordinates correspond to (column,row) pairs.
The matching process is based on characteristic points, which can be extracted with point operators like
points_foerstner or points_harris. The matching itself is carried out in two steps: first, gray value
correlations of mask windows around the input points in the first and the second image are determined and an initial
matching between them is generated using the similarity of the windows in both images. Then, the RANSAC algo-
rithm is applied to find the fundamental matrix that maximizes the number of correspondences under the epipolar
constraint.
The size of the mask windows is MaskSize × MaskSize. Three metrics for the correlation can be se-
lected. If GrayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used, ’sad’
means the sum of absolute differences, and ’ncc’ is the normalized cross correlation. For details please refer to
binocular_disparity. The metric is minimized (’ssd’, ’sad’) or maximized (’ncc’) over all possible point
pairs. A matching found in this way is only accepted if the value of the metric is below the value of MatchThreshold
(’ssd’, ’sad’) or above that value (’ncc’).


To increase the speed of the algorithm the search area for the matching operations can be limited. Only points
within a window of 2 · RowTolerance × 2 · ColTolerance points are considered. The offset of the center of
the search window in the second image with respect to the position of the current point in the first image is given
by RowMove and ColMove.
If the second camera is rotated around the optical axis with respect to the first camera the parameter Rotation
may contain an estimate for the rotation angle or an angle interval in radians. A good guess will increase the quality
of the gray value matching. If the actual rotation differs too much from the specified estimate the matching will
typically fail. In this case, an angle interval should be specified and Rotation is a tuple with two elements. The
larger the given interval the slower the operator is since the RANSAC algorithm is run over all angle increments
within the interval.
After the initial matching is completed a randomized search algorithm (RANSAC) is used to determine the fun-
damental matrix FMatrix. It tries to find the matrix that is consistent with a maximum number of correspon-
dences. For a point to be accepted, the distance to its corresponding epipolar line must not exceed the threshold
DistanceThreshold.
The parameter EstimationMethod decides whether the relative orientation between the cameras is of a special
type and which algorithm is to be applied for its computation. If EstimationMethod is either ’normalized_dlt’
or ’gold_standard’ the relative orientation is arbitrary. If the left and right cameras are identical and the relative orien-
tation between them is a pure translation then choose EstimationMethod equal to ’trans_normalized_dlt’ or
’trans_gold_standard’. The typical application for this special motion case is the scenario of a single fixed camera
looking onto a moving conveyor belt. In order to get a unique solution in the correspondence problem the min-
imum required number of corresponding points is eight in the general case and three in the special, translational
case.
The fundamental matrix is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen.
With ’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result, and returns as
well the covariance of the fundamental matrix CovFMat. Here, ’normalized_dlt’ and ’gold_standard’ stand for
direct-linear-transformation and gold-standard-algorithm respectively.
The value Error indicates the overall quality of the estimation procedure and is the mean Euclidean distance in
pixels between the points and their corresponding epipolar lines.
Point pairs consistent with the mentioned constraints are considered to be in correspondence. Points1 contains
the indices of the matched input points from the first image and Points2 contains the indices of the corresponding
points in the second image.
The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm, and hence
to obtain reproducible results. If RandSeed is set to a positive number the operator yields the same result on
every call with the same parameters because the internally used random number generator is initialized with the
RandSeed. If RandSeed = 0 the random number generator is initialized with the current time. In this case the
results may not be reproducible. The value set for the HALCON system variable ’seed_rand’ (see set_system)
does not affect the results of match_fundamental_matrix_ransac.
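A typical calling sequence might look as follows; all numeric parameter values are merely illustrative, and the projective rectification step mirrors the example of match_fundamental_matrix_distortion_ransac:

* Extract characteristic points in both images.
points_foerstner (Image1, 1, 2, 3, 200, 0.1, 'gauss', 'true', \
                  Rows1, Cols1, _, _, _, _, _, _, _, _)
points_foerstner (Image2, 1, 2, 3, 200, 0.1, 'gauss', 'true', \
                  Rows2, Cols2, _, _, _, _, _, _, _, _)
* Estimate the fundamental matrix; all numeric values are illustrative.
match_fundamental_matrix_ransac (Image1, Image2, Rows1, Cols1, Rows2, \
                                 Cols2, 'ncc', 10, 0, 0, 200, 200, 0, \
                                 0.7, 'gold_standard', 1, 42, FMatrix, \
                                 CovFMat, Error, Points1, Points2)
* Projectively rectify both images (cf. the example of
* match_fundamental_matrix_distortion_ransac).
get_image_size (Image1, Width, Height)
gen_binocular_proj_rectification (Map1, Map2, FMatrix, CovFMat, Width, \
                                  Height, Width, Height, 1, \
                                  'bilinear_map', CovFMatRect, H1, H2)
map_image (Image1, Map1, Image1Mapped)
map_image (Image2, Map2, Image2Mapped)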
Parameters

. Image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2
Input image 1.
. Image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2
Input image 2.
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinates of characteristic points in image 1.
Restriction: length(Rows1) >= 8 || length(Rows1) >= 3
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinates of characteristic points in image 1.
Restriction: length(Cols1) == length(Rows1)
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinates of characteristic points in image 2.
Restriction: length(Rows2) >= 8 || length(Rows2) >= 3
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinates of characteristic points in image 2.
Restriction: length(Cols2) == length(Rows2)


. GrayMatchMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Gray value comparison metric.
Default: ’ssd’
List of values: GrayMatchMethod ∈ {’ssd’, ’sad’, ’ncc’}
. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Size of gray value masks.
Default: 10
Suggested values: MaskSize ∈ {3, 7, 15}
Value range: 1 ≤ MaskSize
. RowMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Average row coordinate shift of corresponding points.
Default: 0
Value range: 0 ≤ RowMove ≤ 200
. ColMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Average column coordinate shift of corresponding points.
Default: 0
Value range: 0 ≤ ColMove ≤ 200
. RowTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Half height of matching search window.
Default: 200
Value range: 1 ≤ RowTolerance
. ColTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Half width of matching search window.
Default: 200
Value range: 1 ≤ ColTolerance
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; real / integer
Estimate of the relative orientation of the right image with respect to the left image.
Default: 0.0
Suggested values: Rotation ∈ {0.0, 0.1, -0.1, 0.7854, 1.571, 3.142}
. MatchThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer / real
Threshold for gray value matching.
Default: 10
Suggested values: MatchThreshold ∈ {10, 20, 50, 100, 0.9, 0.7}
. EstimationMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Algorithm for the computation of the fundamental matrix and for special camera orientations.
Default: ’normalized_dlt’
List of values: EstimationMethod ∈ {’normalized_dlt’, ’gold_standard’, ’trans_normalized_dlt’,
’trans_gold_standard’}
. DistanceThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Maximal deviation of a point from its epipolar line.
Default: 1
Value range: 0.5 ≤ DistanceThreshold ≤ 5
Restriction: DistanceThreshold > 0
. RandSeed (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Seed for the random number generator.
Default: 0
. FMatrix (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real
Computed fundamental matrix.
. CovFMat (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
9 × 9 covariance matrix of the fundamental matrix.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Root-Mean-Square of the epipolar distance error.
. Points1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Indices of matched input points in image 1.
. Points2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Indices of matched input points in image 2.


Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
points_foerstner, points_harris
Possible Successors
vector_to_fundamental_matrix, gen_binocular_proj_rectification
See also
match_essential_matrix_ransac, match_rel_pose_ransac, proj_match_points_ransac
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2003.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
3D Metrology

match_rel_pose_ransac ( Image1, Image2 : : Rows1, Cols1, Rows2,
Cols2, CamPar1, CamPar2, GrayMatchMethod, MaskSize, RowMove,
ColMove, RowTolerance, ColTolerance, Rotation, MatchThreshold,
EstimationMethod, DistanceThreshold, RandSeed : RelPose,
CovRelPose, Error, Points1, Points2 )

Compute the relative orientation between two cameras by automatically finding correspondences between image
points.
Given a set of coordinates of characteristic points (Rows1, Cols1) and (Rows2, Cols2) in the stereo
images Image1 and Image2 along with known internal camera parameters CamPar1 and CamPar2,
match_rel_pose_ransac automatically determines the geometry of the stereo setup and finds the corre-
spondences between the characteristic points. The geometry of the stereo setup is represented by the relative
pose RelPose and all corresponding points have to fulfill the epipolar constraint. RelPose indicates the rel-
ative pose of camera 1 with respect to camera 2 (See create_pose for more information about poses and
their representations.). This is in accordance with the explicit calibration of a stereo setup using the operator
calibrate_cameras. Now, let R, t be the rotation and translation of the relative pose. Then, the essential
matrix E is defined as E = ([t]× R)T , where [t]× denotes the 3 × 3 skew-symmetric matrix realizing the cross
product with the vector t. The pose can be determined from the epipolar constraint:
$$
\begin{pmatrix} X_2 \\ Y_2 \\ 1 \end{pmatrix}^{T} \cdot ([t]_{\times} R)^{T} \cdot \begin{pmatrix} X_1 \\ Y_1 \\ 1 \end{pmatrix} = 0
\quad \text{where} \quad
[t]_{\times} = \begin{pmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{pmatrix} .
$$

Note that the essential matrix is a projective entity and thus is defined only up to a scaling factor. It follows that
the translation vector of the relative pose can only be determined up to scale too. In fact, the computed translation
vector will always be normalized to unit length. As a consequence, a subsequent three-dimensional reconstruction
of the scene, using for instance vector_to_rel_pose, can be carried out only up to a single global scaling
factor.
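To make the relation to R and t explicit, the estimated relative pose can be converted into a homogeneous transformation matrix. The following sketch assumes the usual row-major 3×4 layout of HomMat3D, in which the fourth column holds the translation vector:

* Convert the estimated relative pose into a homogeneous matrix.
pose_to_hom_mat3d (RelPose, HomMat3D)
* Assumed row-major layout: elements 3, 7 and 11 hold the translation
* vector t, which match_rel_pose_ransac normalizes to unit length.
Tx := HomMat3D[3]
Ty := HomMat3D[7]
Tz := HomMat3D[11]
Norm := sqrt(Tx * Tx + Ty * Ty + Tz * Tz)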
The operator match_rel_pose_ransac is designed to deal with a camera model that includes lens distor-
tions. This is in contrast to the operator match_essential_matrix_ransac, which encompasses only
straight line preserving cameras. The camera parameters are passed in CamPar1 and CamPar2. The 3D
direction vectors (X1 , Y1 , 1) and (X2 , Y2 , 1) are calculated from the point coordinates (Rows1,Cols1) and
(Rows2,Cols2) by inverting the process of projection (see Calibration).


The matching process is based on characteristic points, which can be extracted with point operators like
points_foerstner or points_harris. The matching itself is carried out in two steps: first, gray value
correlations of mask windows around the input points in the first and the second image are determined and an ini-
tial matching between them is generated using the similarity of the windows in both images. Then, the RANSAC
algorithm is applied to find the relative pose that maximizes the number of correspondences under the epipolar
constraint.
The size of the mask windows is MaskSize × MaskSize. Three metrics for the correlation can be se-
lected. If GrayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used, ’sad’
means the sum of absolute differences, and ’ncc’ is the normalized cross correlation. For details please refer to
binocular_disparity. The metric is minimized (’ssd’, ’sad’) or maximized (’ncc’) over all possible point
pairs. A matching found in this way is only accepted if the value of the metric is below the value of MatchThreshold
(’ssd’, ’sad’) or above that value (’ncc’).
To increase the speed of the algorithm, the search area for the matching operations can be limited. Only points
within a window of 2 · RowTolerance × 2 · ColTolerance points are considered. The offset of the center of
the search window in the second image with respect to the position of the current point in the first image is given
by RowMove and ColMove.
If the second camera is rotated around the optical axis with respect to the first camera the parameter Rotation
may contain an estimate for the rotation angle or an angle interval in radians. A good guess will increase the quality
of the gray value matching. If the actual rotation differs too much from the specified estimate the matching will
typically fail. In this case, an angle interval should be specified, and Rotation is a tuple with two elements. The
larger the given interval the slower the operator is since the RANSAC algorithm is run over all angle increments
within the interval.
After the initial matching is completed a randomized search algorithm (RANSAC) is used to determine the rel-
ative pose RelPose. It tries to find the relative pose that is consistent with a maximum number of correspon-
dences. For a point to be accepted, the distance to its corresponding epipolar line must not exceed the threshold
DistanceThreshold.
The parameter EstimationMethod decides whether the relative orientation between the cameras is of a special
type and which algorithm is to be applied for its computation. If EstimationMethod is either ’normalized_dlt’
or ’gold_standard’ the relative orientation is arbitrary. Choosing ’trans_normalized_dlt’ or ’trans_gold_standard’
means that the relative motion between the cameras is a pure translation. The typical application for this special
motion case is the scenario of a single fixed camera looking onto a moving conveyor belt. In order to get a unique
solution in the correspondence problem the minimum required number of corresponding points is six in the general
case and three in the special, translational case.
The relative pose is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen. With
’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result, and returns as well the
covariance of the relative pose CovRelPose. Here, ’normalized_dlt’ and ’gold_standard’ stand for direct-linear-
transformation and gold-standard-algorithm respectively. Note that, in general, the found correspondences differ
depending on the deployed estimation method.
The value Error indicates the overall quality of the estimation procedure and is the mean Euclidean distance in
pixels between the points and their corresponding epipolar lines.
Point pairs consistent with the mentioned constraints are considered to be in correspondence. Points1 contains
the indices of the matched input points from the first image and Points2 contains the indices of the corresponding
points in the second image.
For the operator match_rel_pose_ransac a special configuration of scene points and cameras exists: if all
3D points lie in a single plane and additionally are all closer to one of the two cameras then the solution in the
essential matrix is not unique but twofold. As a consequence both solutions are computed and returned by the
operator. This means that the output parameters RelPose, CovRelPose and Error are of double length and
the values of the second solution are simply concatenated behind the values of the first one.
The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm, and hence
to obtain reproducible results. If RandSeed is set to a positive number the operator yields the same result on
every call with the same parameters because the internally used random number generator is initialized with the
RandSeed. If RandSeed = 0 the random number generator is initialized with the current time. In this case the
results may not be reproducible. The value set for the HALCON system variable ’seed_rand’ (see set_system)
does not affect the results of match_rel_pose_ransac.
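A typical calling sequence might look as follows. CamPar1 and CamPar2 are assumed to be known from a prior calibration, and all numeric parameter values are merely illustrative:

* Extract characteristic points in both images.
points_foerstner (Image1, 1, 2, 3, 200, 0.1, 'gauss', 'true', \
                  Rows1, Cols1, _, _, _, _, _, _, _, _)
points_foerstner (Image2, 1, 2, 3, 200, 0.1, 'gauss', 'true', \
                  Rows2, Cols2, _, _, _, _, _, _, _, _)
* CamPar1 and CamPar2 are assumed to come from a prior calibration.
* Estimate the relative pose; parameter values are illustrative only.
match_rel_pose_ransac (Image1, Image2, Rows1, Cols1, Rows2, Cols2, \
                       CamPar1, CamPar2, 'ncc', 10, 0, 0, 200, 200, \
                       0, 0.7, 'gold_standard', 1, 42, RelPose, \
                       CovRelPose, Error, Points1, Points2)
* The matched point pairs and RelPose can then be passed on to
* vector_to_rel_pose for a statistically optimal refinement.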


Parameters
. Image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2
Input image 1.
. Image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2
Input image 2.
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinates of characteristic points in image 1.
Restriction: length(Rows1) >= 6 || length(Rows1) >= 3
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinates of characteristic points in image 1.
Restriction: length(Cols1) == length(Rows1)
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinates of characteristic points in image 2.
Restriction: length(Rows2) >= 6 || length(Rows2) >= 3
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinates of characteristic points in image 2.
Restriction: length(Cols2) == length(Rows2)
. CamPar1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Parameters of the 1st camera.
. CamPar2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Parameters of the 2nd camera.
. GrayMatchMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Gray value comparison metric.
Default: ’ssd’
List of values: GrayMatchMethod ∈ {’ssd’, ’sad’, ’ncc’}
. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Size of gray value masks.
Default: 10
Suggested values: MaskSize ∈ {3, 7, 15}
Value range: 1 ≤ MaskSize
. RowMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Average row coordinate shift of corresponding points.
Default: 0
Value range: 0 ≤ RowMove ≤ 200
. ColMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Average column coordinate shift of corresponding points.
Default: 0
Value range: 0 ≤ ColMove ≤ 200
. RowTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Half height of matching search window.
Default: 200
Value range: 1 ≤ RowTolerance
. ColTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Half width of matching search window.
Default: 200
Value range: 1 ≤ ColTolerance
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; real / integer
Estimate of the relative orientation of the right image with respect to the left image.
Default: 0.0
Suggested values: Rotation ∈ {0.0, 0.1, -0.1, 0.7854, 1.571, 3.142}
. MatchThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer / real
Threshold for gray value matching.
Default: 10
Suggested values: MatchThreshold ∈ {10, 20, 50, 100, 0.9, 0.7}
. EstimationMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Algorithm for the computation of the relative pose and for special pose types.
Default: ’normalized_dlt’
List of values: EstimationMethod ∈ {’normalized_dlt’, ’gold_standard’, ’trans_normalized_dlt’,
’trans_gold_standard’}
. DistanceThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Maximal deviation of a point from its epipolar line.
Default: 1
Value range: 0.5 ≤ DistanceThreshold ≤ 5
Restriction: DistanceThreshold > 0
. RandSeed (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Seed for the random number generator.
Default: 0
. RelPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Computed relative orientation of the cameras (3D pose).
. CovRelPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
6 × 6 covariance matrix of the relative orientation.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Root-Mean-Square of the epipolar distance error.
. Points1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Indices of matched input points in image 1.
. Points2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Indices of matched input points in image 2.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
points_foerstner, points_harris
Possible Successors
vector_to_rel_pose, gen_binocular_rectification_map
See also
binocular_calibration, match_fundamental_matrix_ransac,
match_essential_matrix_ransac, create_pose
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2003.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
3D Metrology

reconst3d_from_fundamental_matrix ( : : Rows1, Cols1, Rows2, Cols2, CovRR1, CovRC1, CovCC1, CovRR2,
    CovRC2, CovCC2, FMatrix, CovFMat : X, Y, Z, W, CovXYZW )

Compute the projective 3D reconstruction of points based on the fundamental matrix.


A pair of stereo images is called weakly calibrated if the fundamental matrix, which defines the geometric relation
between the two images, is known. Given such a fundamental matrix FMatrix and a set of corresponding points
(Rows1,Cols1) and (Rows2,Cols2) the operator reconst3d_from_fundamental_matrix determines
the three-dimensional space points projecting onto these image points. This 3D reconstruction is purely projective
and the projective coordinates are returned by the four-vector (X,Y,Z,W). This type of reconstruction is also known
as projective triangulation. If additionally the covariances CovRR1, CovRC1, CovCC1 and CovRR2, CovRC2,
CovCC2 of the image points are given the covariances of the reconstructed points CovXYZW are computed too.
Let n be the number of points. Then the concatenated covariances are stored in a 16 × n tuple. The computation
of the covariances is more precise if the covariance of the fundamental matrix CovFMat is provided.
The operator reconst3d_from_fundamental_matrix is typically used after
match_fundamental_matrix_ransac to perform the 3D reconstruction. This saves computational
cost compared with using vector_to_fundamental_matrix.
reconst3d_from_fundamental_matrix is the projective equivalent to the Euclidean reconstruction op-
erator intersect_lines_of_sight.
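A minimal HDevelop sketch of this workflow is shown below; it assumes that FMatrix, CovFMat, and the matched point tuples are already available from a preceding matching step, and it passes empty tuples for the optional point covariances (all variable names are illustrative):

* FMatrix/CovFMat and the corresponding point tuples come, e.g., from
* match_fundamental_matrix_ransac.
reconst3d_from_fundamental_matrix (RowsM1, ColsM1, RowsM2, ColsM2, [], [], [], [], [], [], FMatrix, CovFMat, X, Y, Z, W, CovXYZW)
* Normalize the homogeneous coordinates (valid where W is nonzero); the
* result is still only determined up to a projective transformation.
Xn := X / W
Yn := Y / W
Zn := Z / W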
Parameters
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Input points in image 1 (row coordinate).
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Input points in image 1 (column coordinate).
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Input points in image 2 (row coordinate).
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Input points in image 2 (column coordinate).
. CovRR1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Row coordinate variance of the points in image 1.
Default: []
. CovRC1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Covariance of the points in image 1.
Default: []
. CovCC1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Column coordinate variance of the points in image 1.
Default: []
. CovRR2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Row coordinate variance of the points in image 2.
Default: []
. CovRC2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Covariance of the points in image 2.
Default: []
. CovCC2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Column coordinate variance of the points in image 2.
Default: []
. FMatrix (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real
Fundamental matrix.
. CovFMat (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
9 × 9 covariance matrix of the fundamental matrix.
Default: []
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
X coordinates of the reconstructed points in projective 3D space.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Y coordinates of the reconstructed points in projective 3D space.
. Z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Z coordinates of the reconstructed points in projective 3D space.
. W (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
W coordinates of the reconstructed points in projective 3D space.
. CovXYZW (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Covariance matrices of the reconstructed points.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).


• Processed without parallelization.
Possible Predecessors
match_fundamental_matrix_ransac
Alternatives
vector_to_fundamental_matrix, intersect_lines_of_sight
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2000.
Module
3D Metrology

rel_pose_to_fundamental_matrix ( : : RelPose, CovRelPose, CamPar1, CamPar2 : FMatrix, CovFMat )

Compute the fundamental matrix from the relative orientation of two cameras.
Cameras including lens distortions can be modeled by the following set of parameters: the focal length $f$, two
scaling factors $s_x$, $s_y$, the coordinates of the principal point $(c_x, c_y)$, and the distortion coefficient $\kappa$. For a more
detailed description see the chapter Calibration. Only cameras with a distortion coefficient equal to zero project
straight lines in the world onto straight lines in the image. This is also true for telecentric cameras and for cameras
with tilt lenses. rel_pose_to_fundamental_matrix handles telecentric lenses and tilt lenses correctly.
However, for reasons of simplicity, these lens types are ignored in the formulas below. If the distortion coefficient
is equal to zero, image projection is a linear mapping and the camera, i.e., the set of internal parameters, can be
described by the camera matrix CamMat:

    $$ CamMat = \begin{pmatrix} f/s_x & 0 & c_x \\ 0 & f/s_y & c_y \\ 0 & 0 & 1 \end{pmatrix} . $$

Going from a nonlinear model to a linear model is an approximation of the real underlying camera. For a variety of
camera lenses, especially lenses with long focal length, the error induced by this approximation can be neglected.
Following the formula $E = ([t]_\times R)^T$, the essential matrix $E$ is derived from the translation $t$ and the rotation
$R$ of the relative pose RelPose (see also operator vector_to_rel_pose). In the linearized framework the
fundamental matrix can be calculated from the relative pose and the camera matrices according to the formula
presented under essential_to_fundamental_matrix:

    $$ FMatrix = CamMat_2^{-T} \cdot ([t]_\times R)^T \cdot CamMat_1^{-1} . $$

The transformation from a relative pose to a fundamental matrix is accompanied by the propagation of the covariance
matrix CovRelPose to CovFMat. If CovRelPose is empty, CovFMat will be empty, too.
The conversion operator rel_pose_to_fundamental_matrix is used especially for a subsequent visual-
ization of the epipolar line structure via the fundamental matrix, which depicts the underlying stereo geometry.
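A short HDevelop sketch of this use case, assuming RelPose and CovRelPose have been computed beforehand (variable names are illustrative):

* Relative pose from a preceding call to vector_to_rel_pose.
vector_to_rel_pose (Rows1, Cols1, Rows2, Cols2, [], [], [], [], [], [], CamPar1, CamPar2, 'gold_standard', RelPose, CovRelPose, Error, X, Y, Z, CovXYZ)
* Convert the pose to a fundamental matrix, e.g., to visualize epipolar lines.
rel_pose_to_fundamental_matrix (RelPose, CovRelPose, CamPar1, CamPar2, FMatrix, CovFMat)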
Parameters

. RelPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Relative orientation of the cameras (3D pose).
. CovRelPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
6 × 6 covariance matrix of relative pose.
Default: []
. CamPar1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Parameters of the 1st camera.
. CamPar2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Parameters of the 2nd camera.
. FMatrix (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real
Computed fundamental matrix.
. CovFMat (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
9 × 9 covariance matrix of the fundamental matrix.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
vector_to_rel_pose
Alternatives
essential_to_fundamental_matrix
See also
calibrate_cameras
Module
3D Metrology

vector_to_essential_matrix ( : : Rows1, Cols1, Rows2, Cols2, CovRR1, CovRC1, CovCC1, CovRR2, CovRC2,
    CovCC2, CamMat1, CamMat2, Method : EMatrix, CovEMat, Error, X, Y, Z, CovXYZ )

Compute the essential matrix given image point correspondences and known camera matrices and reconstruct 3D
points.
For a stereo configuration with known camera matrices the geometric relation between the two images is de-
fined by the essential matrix. The operator vector_to_essential_matrix determines the essential matrix
EMatrix from, in general, at least six given point correspondences that fulfill the epipolar constraint:

    $$ \begin{pmatrix} X_2 \\ Y_2 \\ 1 \end{pmatrix}^T \cdot EMatrix \cdot \begin{pmatrix} X_1 \\ Y_1 \\ 1 \end{pmatrix} = 0 $$

The operator vector_to_essential_matrix is designed to deal only with a linear camera model. This is
in contrast to the operator vector_to_rel_pose, which also encompasses lens distortions. The internal camera
parameters are passed by the arguments CamMat1 and CamMat2, which are 3 × 3 upper triangular matrices
describing an affine transformation. The relation between the vector (X,Y,1), defining the direction from the
camera to the viewed 3D point, and its (projective) 2D image coordinates (col,row,1) is:

    $$ \begin{pmatrix} col \\ row \\ 1 \end{pmatrix} = CamMat \cdot \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix} \qquad \text{where} \qquad CamMat = \begin{pmatrix} f/s_x & s & c_x \\ 0 & f/s_y & c_y \\ 0 & 0 & 1 \end{pmatrix} . $$

The focal length is denoted by $f$, $s_x$ and $s_y$ are scaling factors, $s$ describes a skew factor, and $(c_x, c_y)$ indicates
the principal point. Mainly, these are the elements known from the camera parameters as used for example in
calibrate_cameras. Alternatively, the elements of the camera matrix can be described in a different way,
see e.g. stationary_camera_self_calibration.
The point correspondences (Rows1,Cols1) and (Rows2,Cols2) are typically found by applying the operator
match_essential_matrix_ransac. Multiplying the image coordinates by the inverse of the camera ma-
trices results in the 3D direction vectors, which can then be inserted in the epipolar constraint.
The parameter Method decides whether the relative orientation between the cameras is of a special type and which
algorithm is to be applied for its computation. If Method is either ’normalized_dlt’ or ’gold_standard’ the relative
orientation is arbitrary. Choosing ’trans_normalized_dlt’ or ’trans_gold_standard’ means that the relative motion
between the cameras is a pure translation. The typical application for this special motion case is the scenario
of a single fixed camera looking onto a moving conveyor belt. In this case the minimum required number of
corresponding points is just two instead of six in the general case.
The essential matrix is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen.
With ’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result. Here, ’normal-
ized_dlt’ and ’gold_standard’ stand for direct-linear-transformation and gold-standard-algorithm respectively. All
methods return the coordinates (X,Y,Z) of the reconstructed 3D points. The optimal methods also return the co-
variances of the 3D points in CovXYZ. Let n be the number of points; then the 3 × 3 covariance matrices are
concatenated and stored in a tuple of length 9n. Additionally, the optimal methods return the covariance of the
essential matrix CovEMat.
If an optimal gold-standard-algorithm is chosen the covariances of the image points (CovRR1, CovRC1, CovCC1,
CovRR2, CovRC2, CovCC2) can be incorporated in the computation. They can be provided for example by the
operator points_foerstner. If the point covariances are unknown, which is the default, empty tuples are
input. In this case the optimization algorithm internally assumes uniform and equal covariances for all points.
The value Error indicates the overall quality of the optimization process and is the root-mean-square Euclidean
distance in pixels between the points and their corresponding epipolar lines.
For the operator vector_to_essential_matrix a special configuration of scene points and cameras exists:
if all 3D points lie in a single plane and additionally are all closer to one of the two cameras then the solution
in the essential matrix is not unique but twofold. As a consequence both solutions are computed and returned by
the operator. This means that all output parameters are of double length and the values of the second solution are
simply concatenated behind the values of the first one.
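The HDevelop sketch below outlines a typical call; the construction of the camera matrices with cam_par_to_cam_mat and all variable names are illustrative assumptions (any 3 × 3 camera matrices in the above form can be used):

* Derive the camera matrices from calibrated internal camera parameters
* (only meaningful for cameras without lens distortions).
cam_par_to_cam_mat (CamPar1, CamMat1, Width1, Height1)
cam_par_to_cam_mat (CamPar2, CamMat2, Width2, Height2)
* Correspondences from match_essential_matrix_ransac; point covariances
* are unknown and therefore passed as empty tuples.
vector_to_essential_matrix (Rows1, Cols1, Rows2, Cols2, [], [], [], [], [], [], CamMat1, CamMat2, 'gold_standard', EMatrix, CovEMat, Error, X, Y, Z, CovXYZ)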
Parameters

. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Input points in image 1 (row coordinate).
Restriction: length(Rows1) >= 6 || length(Rows1) >= 2
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Input points in image 1 (column coordinate).
Restriction: length(Cols1) == length(Rows1)
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Input points in image 2 (row coordinate).
Restriction: length(Rows2) == length(Rows1)
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Input points in image 2 (column coordinate).
Restriction: length(Cols2) == length(Rows1)
. CovRR1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinate variance of the points in image 1.
Default: []
. CovRC1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Covariance of the points in image 1.
Default: []
. CovCC1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinate variance of the points in image 1.
Default: []
. CovRR2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinate variance of the points in image 2.
Default: []
. CovRC2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Covariance of the points in image 2.
Default: []
. CovCC2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinate variance of the points in image 2.
Default: []
. CamMat1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real / integer
Camera matrix of the 1st camera.
. CamMat2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real / integer
Camera matrix of the 2nd camera.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Algorithm for the computation of the essential matrix and for special camera orientations.
Default: ’normalized_dlt’
List of values: Method ∈ {’normalized_dlt’, ’gold_standard’, ’trans_normalized_dlt’,
’trans_gold_standard’}
. EMatrix (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real
Computed essential matrix.
. CovEMat (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
9 × 9 covariance matrix of the essential matrix.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Root-Mean-Square of the epipolar distance error.
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
X coordinates of the reconstructed 3D points.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Y coordinates of the reconstructed 3D points.
. Z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Z coordinates of the reconstructed 3D points.
. CovXYZ (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Covariance matrices of the reconstructed 3D points.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
match_essential_matrix_ransac
Possible Successors
essential_to_fundamental_matrix
Alternatives
vector_to_rel_pose, vector_to_fundamental_matrix
See also
stationary_camera_self_calibration
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2003.
J.Chris McGlone (editor): “Manual of Photogrammetry”; American Society for Photogrammetry and Remote
Sensing ; 2004.
Module
3D Metrology

vector_to_fundamental_matrix ( : : Rows1, Cols1, Rows2, Cols2, CovRR1, CovRC1, CovCC1, CovRR2, CovRC2,
    CovCC2, Method : FMatrix, CovFMat, Error, X, Y, Z, W, CovXYZW )

Compute the fundamental matrix given a set of image point correspondences and reconstruct 3D points.
For a stereo configuration with unknown camera parameters the geometric relation between the two images is
defined by the fundamental matrix. The operator vector_to_fundamental_matrix determines the fun-
damental matrix FMatrix from given point correspondences (Rows1,Cols1), (Rows2,Cols2) that fulfill the
epipolar constraint:

    $$ \begin{pmatrix} Cols2 \\ Rows2 \\ 1 \end{pmatrix}^T \cdot FMatrix \cdot \begin{pmatrix} Cols1 \\ Rows1 \\ 1 \end{pmatrix} = 0 . $$

Note the column/row ordering in the point coordinates: since the fundamental matrix encodes the projective re-
lation between two stereo images embedded in 3D space, the x/y notation must be compliant with the camera
coordinate system. Therefore, (x,y) coordinates correspond to (column,row) pairs.
For a general relative orientation of the two cameras the minimum number of required point correspondences is
eight. Then, Method is chosen to be ’normalized_dlt’ or ’gold_standard’. If left and right camera are identical and
the relative orientation between them is a pure translation then choose Method equal to ’trans_normalized_dlt’
or ’trans_gold_standard’. In this special case the minimum number of correspondences is only two. The typical
application of the motion being a pure translation is that of a single fixed camera looking onto a moving conveyor
belt.
The fundamental matrix is determined by minimizing a cost function. To minimize the respective error different
algorithms are available, and the user can choose between the direct-linear-transformation (’normalized_dlt’) and
the gold-standard-algorithm (’gold_standard’). As with the motion type, the algorithm can be selected with the
parameter Method. For Method = ’normalized_dlt’ or ’trans_normalized_dlt’, a linear algorithm minimizes an
algebraic error based on the above epipolar constraint. This algorithm offers a good compromise between speed
and accuracy. For Method = ’gold_standard’ or ’trans_gold_standard’, a mathematically optimal, but slower op-
timization is used, which minimizes the geometric backprojection error of reconstructed projective 3D points. In
this case, in addition to the fundamental matrix its covariance matrix CovFMat is output, along with the projective
coordinates (X,Y,Z,W) of the reconstructed points and their covariances CovXYZW. Let n be the number of points.
Then the concatenated covariances are stored in a 16 × n tuple.
If an optimal gold-standard-algorithm is chosen the covariances of the image points (CovRR1, CovRC1, CovCC1,
CovRR2, CovRC2, CovCC2) can be incorporated in the computation. They can be provided for example by the
operator points_foerstner. If the point covariances are unknown, which is the default, empty tuples are
input. In this case the optimization algorithm internally assumes uniform and equal covariances for all points.
The value Error indicates the overall quality of the optimization procedure and is the mean Euclidean distance
in pixels between the points and their corresponding epipolar lines.
If the correspondences between the points are not known, match_fundamental_matrix_ransac should be
used instead.
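A minimal HDevelop sketch, with the ’gold_standard’ method and all variable names chosen purely for illustration:

* At least eight correspondences are required for a general relative
* orientation; the point covariances are unknown and passed as [].
vector_to_fundamental_matrix (Rows1, Cols1, Rows2, Cols2, [], [], [], [], [], [], 'gold_standard', FMatrix, CovFMat, Error, X, Y, Z, W, CovXYZW)
* FMatrix can now be used, e.g., with gen_binocular_proj_rectification.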
Parameters
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Input points in image 1 (row coordinate).
Restriction: length(Rows1) >= 8 || length(Rows1) >= 2
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Input points in image 1 (column coordinate).
Restriction: length(Cols1) == length(Rows1)
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Input points in image 2 (row coordinate).
Restriction: length(Rows2) == length(Rows1)
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Input points in image 2 (column coordinate).
Restriction: length(Cols2) == length(Rows1)
. CovRR1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinate variance of the points in image 1.
Default: []
. CovRC1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Covariance of the points in image 1.
Default: []
. CovCC1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinate variance of the points in image 1.
Default: []
. CovRR2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinate variance of the points in image 2.
Default: []
. CovRC2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Covariance of the points in image 2.
Default: []
. CovCC2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinate variance of the points in image 2.
Default: []
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Estimation algorithm.
Default: ’normalized_dlt’
List of values: Method ∈ {’normalized_dlt’, ’gold_standard’, ’trans_normalized_dlt’,
’trans_gold_standard’}
. FMatrix (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real
Computed fundamental matrix.
. CovFMat (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
9 × 9 covariance matrix of the fundamental matrix.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Root-Mean-Square of the epipolar distance error.
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
X coordinates of the reconstructed points in projective 3D space.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Y coordinates of the reconstructed points in projective 3D space.
. Z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Z coordinates of the reconstructed points in projective 3D space.
. W (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
W coordinates of the reconstructed points in projective 3D space.
. CovXYZW (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Covariance matrices of the reconstructed 3D points.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
match_fundamental_matrix_ransac
Possible Successors
gen_binocular_proj_rectification
Alternatives
vector_to_essential_matrix, vector_to_rel_pose
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2000.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
3D Metrology

vector_to_fundamental_matrix_distortion ( : : Rows1, Cols1, Rows2, Cols2, CovRR1, CovRC1, CovCC1,
    CovRR2, CovRC2, CovCC2, ImageWidth, ImageHeight, Method : FMatrix, Kappa, Error, X, Y, Z, W )

Compute the fundamental matrix and the radial distortion coefficient given a set of image point correspondences
and reconstruct 3D points.
For a stereo configuration with unknown camera parameters, the geometric relation between the two images is de-
fined by the fundamental matrix. vector_to_fundamental_matrix_distortion determines the fun-
damental matrix FMatrix and the radial distortion coefficient Kappa (κ) from given point correspondences
(Rows1,Cols1), (Rows2,Cols2) that fulfill the epipolar constraint:

    $$ \begin{pmatrix} c_2 \\ r_2 \\ 1 \end{pmatrix}^T \cdot FMatrix \cdot \begin{pmatrix} c_1 \\ r_1 \\ 1 \end{pmatrix} = 0 . $$

Here, $(r_1, c_1)$ and $(r_2, c_2)$ denote image points that are obtained by undistorting the input image points with the
division model (see Calibration):

    $$ r = \frac{\tilde{r}}{1 + \kappa(\tilde{r}^2 + \tilde{c}^2)} \qquad c = \frac{\tilde{c}}{1 + \kappa(\tilde{r}^2 + \tilde{c}^2)} $$

Here, $(\tilde{r}_1, \tilde{c}_1) = (Rows1 - 0.5(ImageHeight - 1),\ Cols1 - 0.5(ImageWidth - 1))$ and
$(\tilde{r}_2, \tilde{c}_2) = (Rows2 - 0.5(ImageHeight - 1),\ Cols2 - 0.5(ImageWidth - 1))$
denote the distorted image points, specified relative to the image center. Thus,
vector_to_fundamental_matrix_distortion assumes that the principal point of the camera,
i.e., the center of the radial distortions, lies at the center of the image.
The returned Kappa can be used to construct camera parameters that can be used to rectify images or
points (see change_radial_distortion_cam_par, change_radial_distortion_image, and
change_radial_distortion_points):

    CamPar = [’area_scan_telecentric_division’, 0.0, Kappa, 1.0, 1.0, 0.5 (ImageWidth − 1),
              0.5 (ImageHeight − 1), ImageWidth, ImageHeight]

Note the column/row ordering in the point coordinates above: since the fundamental matrix encodes the projective
relation between two stereo images embedded in 3D space, the x/y notation must be compliant with the camera
coordinate system. Therefore, (x,y) coordinates correspond to (column,row) pairs.
For a general relative orientation of the two cameras, the minimum number of required point correspondences
is nine. Then, Method must be set to ’linear’ or ’gold_standard’. If the left and right cameras are identi-
cal and the relative orientation between them is a pure translation, Method must be set to ’trans_linear’ or
’trans_gold_standard’. In this special case, the minimum number of correspondences is only four. The typical
application of the motion being a pure translation is a single fixed camera looking onto a moving conveyor belt.
The fundamental matrix is determined by minimizing a cost function. To minimize the respective error, different
algorithms are available, and the user can choose between the linear (’linear’) and the gold-standard algorithm
(’gold_standard’). Like the motion type, the algorithm can be selected with the parameter Method. For Method
= ’linear’ or ’trans_linear’, a linear algorithm that minimizes an algebraic error based on the above epipolar
constraint is used. This algorithm is very fast. For the pure translation case (Method = ’trans_linear’), the
linear method returns accurate results for small to moderate noise of the point coordinates and for most distortions
(except for very small distortions). For a general relative orientation of the two cameras (Method = ’linear’),
the linear method only returns accurate results for very small noise of the point coordinates and for sufficiently
large distortions. For Method = ’gold_standard’ or ’trans_gold_standard’, a mathematically optimal but slower
optimization is used, which minimizes the geometric reprojection error of reconstructed projective 3D points. In
this case, in addition to the fundamental matrix and the distortion coefficient, the projective coordinates (X,Y,Z,W)
of the reconstructed points are returned. For a general relative orientation of the two cameras, in general Method
= ’gold_standard’ should be selected.
If an optimal gold-standard algorithm is chosen, the covariances of the image points (CovRR1, CovRC1, CovCC1,
CovRR2, CovRC2, CovCC2) can be incorporated into the computation. They can be provided, for example, by
the operator points_foerstner. If the point covariances are unknown, which is the default, empty tuples are
passed. In this case, the optimization algorithm internally assumes uniform and equal covariances for all points.
The value Error indicates the overall quality of the optimization procedure and is the mean symmetric Euclidean
distance in pixels between the points and their corresponding epipolar lines.
If the correspondence between the points is not known,
match_fundamental_matrix_distortion_ransac should be used instead.
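The HDevelop sketch below shows a possible call and how the returned Kappa could be turned into camera parameters for undistorting the original image points; the variable names and the subsequent rectification steps are illustrative assumptions:

* Estimate the fundamental matrix and the radial distortion coefficient.
vector_to_fundamental_matrix_distortion (Rows1, Cols1, Rows2, Cols2, [], [], [], [], [], [], Width, Height, 'gold_standard', FMatrix, Kappa, Error, X, Y, Z, W)
* Build camera parameters as described above and remove the distortion
* from the input points of the first image.
CamPar := ['area_scan_telecentric_division', 0.0, Kappa, 1.0, 1.0, 0.5 * (Width - 1), 0.5 * (Height - 1), Width, Height]
change_radial_distortion_cam_par ('fixed', CamPar, 0.0, CamParRect)
change_radial_distortion_points (Rows1, Cols1, CamPar, CamParRect, Rows1Rect, Cols1Rect)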
Parameters
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real / integer
Input points in image 1 (row coordinate).
Restriction: length(Rows1) >= 9 || length(Rows1) >= 4
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real / integer
Input points in image 1 (column coordinate).
Restriction: length(Cols1) == length(Rows1)
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real / integer
Input points in image 2 (row coordinate).
Restriction: length(Rows2) == length(Rows1)
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real / integer
Input points in image 2 (column coordinate).
Restriction: length(Cols2) == length(Rows1)
. CovRR1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinate variance of the points in image 1.
Default: []
. CovRC1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Covariance of the points in image 1.
Default: []
. CovCC1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinate variance of the points in image 1.
Default: []
. CovRR2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinate variance of the points in image 2.
Default: []
. CovRC2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Covariance of the points in image 2.
Default: []
. CovCC2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinate variance of the points in image 2.
Default: []
. ImageWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Width of the images from which the points were extracted.
Restriction: ImageWidth > 0
. ImageHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Height of the images from which the points were extracted.
Restriction: ImageHeight > 0
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Estimation algorithm.
Default: ’gold_standard’
List of values: Method ∈ {’linear’, ’gold_standard’, ’trans_linear’, ’trans_gold_standard’}
. FMatrix (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real
Computed fundamental matrix.
. Kappa (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Computed radial distortion coefficient.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Root-Mean-Square epipolar distance error.
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
X coordinates of the reconstructed points in projective 3D space.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Y coordinates of the reconstructed points in projective 3D space.
. Z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Z coordinates of the reconstructed points in projective 3D space.
. W (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
W coordinates of the reconstructed points in projective 3D space.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
match_fundamental_matrix_distortion_ransac
Possible Successors
change_radial_distortion_cam_par, change_radial_distortion_image,
change_radial_distortion_points, gen_binocular_proj_rectification
Alternatives
vector_to_fundamental_matrix, vector_to_essential_matrix, vector_to_rel_pose
See also
calibrate_cameras
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2003.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
3D Metrology

vector_to_rel_pose ( : : Rows1, Cols1, Rows2, Cols2, CovRR1, CovRC1, CovCC1, CovRR2, CovRC2, CovCC2,
    CamPar1, CamPar2, Method : RelPose, CovRelPose, Error, X, Y, Z, CovXYZ )

Compute the relative orientation between two cameras given image point correspondences and known camera
parameters and reconstruct 3D space points.
For a stereo configuration with known camera parameters the geometric relation between the two images is defined
by the relative pose. The operator vector_to_rel_pose computes the relative pose from in general at least
six point correspondences in the image pair. RelPose indicates the relative pose of camera 1 with respect to
camera 2 (see create_pose for more information about poses and their representations). This is in accordance
with the explicit calibration of a stereo setup using the operator calibrate_cameras. Now, let R, t be the
rotation and translation of the relative pose. Then, the essential matrix $E$ is defined as $E = ([t]_\times R)^T$, where $[t]_\times$
denotes the 3 × 3 skew-symmetric matrix realizing the cross product with the vector t. The pose can be determined
from the epipolar constraint:

    $$ \begin{pmatrix} X_2 \\ Y_2 \\ 1 \end{pmatrix}^T \cdot ([t]_\times R)^T \cdot \begin{pmatrix} X_1 \\ Y_1 \\ 1 \end{pmatrix} = 0 \qquad \text{where} \qquad [t]_\times = \begin{pmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{pmatrix} . $$

Note that the essential matrix is a projective entity and thus is defined only up to a scaling factor. It follows that
the translation vector of the relative pose can only be determined up to scale too. In fact, the computed translation
vector will always be normalized to unit length. As a consequence, a three-dimensional reconstruction of the
scene, here in terms of points given by their coordinates (X,Y,Z), can be carried out only up to a single global
scaling factor. If the absolute 3D coordinates of the reconstruction are to be achieved the unknown scaling factor
can be computed from a gauge, which has to be visible in both images. For example, a simple gauge can be given
by any known distance between points in the scene.
The operator vector_to_rel_pose is designed to deal with a camera model that includes lens distortions.
This is in contrast to the operator vector_to_essential_matrix, which encompasses only straight line
preserving cameras. The camera parameters are passed by the arguments CamPar1, CamPar2. The 3D
direction vectors $(X_1, Y_1, 1)$ and $(X_2, Y_2, 1)$ are calculated from the point coordinates (Rows1,Cols1) and
(Rows2,Cols2) by inverting the process of projection (see Calibration). The point correspondences are typi-
cally determined by applying the operator match_rel_pose_ransac.
The parameter Method decides whether the relative orientation between the cameras is of a special type and which
algorithm is to be applied for its computation. If Method is either ’normalized_dlt’ or ’gold_standard’ the relative
orientation is arbitrary. Choosing ’trans_normalized_dlt’ or ’trans_gold_standard’ means that the relative motion
between the cameras is a pure translation. The typical application for this special motion case is the scenario
of a single fixed camera looking onto a moving conveyor belt. In this case the minimum required number of
corresponding points is just two instead of six in the general case.
The relative pose is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen. With
’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result. Here, ’normalized_dlt’
and ’gold_standard’ stand for direct-linear-transformation and gold-standard-algorithm respectively. All methods
return the coordinates (X,Y,Z) of the reconstructed 3D points. The optimal methods also return the covariances of
the 3D points in CovXYZ. Let n be the number of points; then the 3 × 3 covariance matrices are concatenated and
stored in a tuple of length 9n. Additionally, the optimal methods return the 6 × 6 covariance matrix of the pose
CovRelPose.
If an optimal gold-standard-algorithm is chosen the covariances of the image points (CovRR1, CovRC1, CovCC1,
CovRR2, CovRC2, CovCC2) can be incorporated in the computation. They can be provided for example by the
operator points_foerstner. If the point covariances are unknown, which is the default, empty tuples are
input. In this case the optimization algorithm internally assumes uniform and equal covariances for all points.
The value Error indicates the overall quality of the optimization process and is the root-mean-square Euclidean
distance in pixels between the points and their corresponding epipolar lines.
For the operator vector_to_rel_pose a special configuration of scene points and cameras exists: if all 3D
points lie in a single plane and additionally are all closer to one of the two cameras then the solution in the relative
pose is not unique but twofold. As a consequence both solutions are computed and returned by the operator. This
means that all output parameters are of double length and the values of the second solution are simply concatenated
behind the values of the first one.
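The following HDevelop lines sketch how the unknown global scale could be fixed with such a gauge; here the gauge is assumed to be a known distance KnownDist between the first two reconstructed points (the point indices and variable names are illustrative):

vector_to_rel_pose (Rows1, Cols1, Rows2, Cols2, [], [], [], [], [], [], CamPar1, CamPar2, 'gold_standard', RelPose, CovRelPose, Error, X, Y, Z, CovXYZ)
* Distance between the first two reconstructed points in the unscaled frame.
Dist := sqrt((X[0] - X[1]) * (X[0] - X[1]) + (Y[0] - Y[1]) * (Y[0] - Y[1]) + (Z[0] - Z[1]) * (Z[0] - Z[1]))
* Scale the reconstruction so that this distance matches the known length.
Scale := KnownDist / Dist
XScaled := X * Scale
YScaled := Y * Scale
ZScaled := Z * Scale
* The translation part of RelPose would have to be scaled accordingly.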
Parameters
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Input points in image 1 (row coordinate).
Restriction: length(Rows1) >= 6 || length(Rows1) >= 2
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Input points in image 1 (column coordinate).
Restriction: length(Cols1) == length(Rows1)
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Input points in image 2 (row coordinate).
Restriction: length(Rows2) == length(Rows1)
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Input points in image 2 (column coordinate).
Restriction: length(Cols2) == length(Rows1)
. CovRR1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinate variance of the points in image 1.
Default: []
. CovRC1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Covariance of the points in image 1.
Default: []
. CovCC1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinate variance of the points in image 1.
Default: []
. CovRR2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinate variance of the points in image 2.
Default: []
. CovRC2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Covariance of the points in image 2.
Default: []
. CovCC2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinate variance of the points in image 2.
Default: []
. CamPar1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Camera parameters of the 1st camera.
. CamPar2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Camera parameters of the 2nd camera.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Algorithm for the computation of the relative pose and for special pose types.
Default: ’normalized_dlt’
List of values: Method ∈ {’normalized_dlt’, ’gold_standard’, ’trans_normalized_dlt’,
’trans_gold_standard’}
. RelPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Computed relative orientation of the cameras (3D pose).
. CovRelPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
6 × 6 covariance matrix of the relative camera orientation.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Root-Mean-Square of the epipolar distance error.
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
X coordinates of the reconstructed 3D points.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Y coordinates of the reconstructed 3D points.
. Z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Z coordinates of the reconstructed 3D points.
. CovXYZ (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Covariance matrices of the reconstructed 3D points.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
match_rel_pose_ransac
Possible Successors
gen_binocular_rectification_map, rel_pose_to_fundamental_matrix
Alternatives
vector_to_essential_matrix, vector_to_fundamental_matrix,
binocular_calibration
See also
camera_calibration
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2003.
J.Chris McGlone (editor): “Manual of Photogrammetry”; American Society for Photogrammetry and Remote
Sensing ; 2004.
Module
3D Metrology

5.2 Depth From Focus

depth_from_focus ( MultiFocusImage : Depth, Confidence : Filter, Selection : )

Extract depth using multiple focus levels.
The operator depth_from_focus extracts the depth using a focus sequence. The images of the focus sequence
have to be passed as a multi channel image (MultiFocusImage). The depth for each pixel will be returned in
Depth as the channel number. The parameter Confidence returns a confidence value for each depth estimation:
The larger this value, the higher the confidence of the depth estimation is.
depth_from_focus selects the pixels with the best focus of all focus levels. The method used to extract these
pixels is specified by the parameters Filter and Selection.
For the parameter Filter, you can choose between the values ’highpass’ and ’bandpass’. To determine the focus
within the image a high- or a bandpass filter can be applied. The larger the filter response, the more in focus
the image is at this location. Compared to the highpass filter, the bandpass filter suppresses high frequencies. This is
useful in particular in images containing strong noise.
Optionally, you can smooth the filtered image using the mean filter by passing two additional integer values for
the mask size in the parameter Filter (e.g., [’highpass’, 7, 7]). This blurs the in-focus region with neighboring
pixels and thus allows to bridge small areas with no texture within the image. Note, however, that this smoothing
does not suppress noise in the original image, since it is applied only after high- or bandpass filtering.
The parameter Selection determines how the optimum focus level is selected. If you pass the value
’next_maximum’, the closest focus maximum in the neighborhood is used. In contrast, if you pass the value ’local’,
the focus level is determined based on the focus values of all focus levels of the pixel. With ’next_maximum’, you
typically achieve a slightly smoothed and more robust result.
This additional smoothing is useful if no telecentric lenses are used to take the input images. In this case, the
position of an object will slightly shift within the sequence. By adding appropriate smoothing, this effect can be
partially compensated.
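For example, band-pass filtering combined with a 9 × 9 mean smoothing and the more robust ’next_maximum’ selection could be requested as follows (an illustrative HDevelop call; the mask size is just an example value):

depth_from_focus (MultiFocusImage, Depth, Confidence, ['bandpass', 9, 9], 'next_maximum')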
Attention
If MultiFocusImage contains more than 255 channels (focus levels), Depth is clipped at 255, i.e. depth
values higher than 255 are ignored.
If the filter mask for Filter is specified with even values, the routine uses the next larger odd values instead (this
way the center of the filter mask is always explicitly determined).
If Selection is set to ’local’ and Filter is set to ’highpass’ or ’bandpass’, depth_from_focus can be
executed on OpenCL devices. If smoothing is enabled, the same restrictions and limitations as for mean_image
apply.
Note that filter operators may return unexpected results if an image with a reduced domain is used as input. Please
refer to the chapter Filters.
Parameters
. MultiFocusImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image(-array) ; object : byte
Multichannel gray image consisting of multiple focus levels.
. Depth (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte
Depth image.
. Confidence (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte
Confidence of depth estimation.
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string / integer
Filter used to find sharp pixels.
Default: ’highpass’
Suggested values: Filter ∈ {’highpass’, ’bandpass’, 3, 5, 7, 9}
. Selection (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Method used to find sharp pixels.
Default: ’next_maximum’
List of values: Selection ∈ {’next_maximum’, ’local’}
Example

compose3(Focus0,Focus1,Focus2,&MultiFocus);
depth_from_focus(MultiFocus,&Depth,&Confidence,'highpass','next_maximum');
mean_image(Depth,&Smooth,15,15);
select_grayvalues_from_channels(MultiFocus,Smooth,&SharpImage);
threshold(Confidence,&HighConfidence,10,255);
reduce_domain(SharpImage,HighConfidence,&ConfidentSharp);


Execution Information

• Supports OpenCL compute devices.


• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on tuple level.
• Automatically parallelized on internal data level.
Possible Predecessors
compose2, compose3, compose4, add_channels, read_image, read_sequence
Possible Successors
select_grayvalues_from_channels, mean_image, binomial_filter, gauss_filter,
threshold
See also
count_channels
Module
3D Metrology

select_grayvalues_from_channels ( MultichannelImage,
IndexImage : Selected : : )

Selection of gray values of a multi-channel image using an index image.


The operator select_grayvalues_from_channels selects gray values from the different channels of
MultichannelImage. The channel number for each pixel is determined from the corresponding pixel value in
IndexImage. Note that IndexImage may have an arbitrary number of channels for reasons of backward
compatibility, but only the first channel is considered.
Parameters
. MultichannelImage (input_object) . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; object : byte
Multi-channel gray value image.
. IndexImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte
Image, where pixel values are interpreted as channel index.
Number of elements: IndexImage == MultichannelImage || IndexImage == 1
. Selected (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte
Resulting image.
Example

compose3(Focus0,Focus1,Focus2,&MultiFocus);
depth_from_focus(MultiFocus,&Depth,&Confidence,'highpass','next_maximum');
mean_image(Depth,&Smooth,15,15);
select_grayvalues_from_channels(MultiFocus,Smooth,&SharpImage);

Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on tuple level.
• Automatically parallelized on domain level.

Possible Predecessors
depth_from_focus, mean_image


Possible Successors
disp_image
See also
count_channels
Module
Foundation

5.3 Multi-View Stereo

This chapter contains operators for multi-view 3D reconstruction.


Concept of Multi-view 3D Reconstruction
With multi-view 3D reconstruction, 3D objects can be generated from 2D images acquired by multiple cameras.
Either the complete 3D surface of an object or single 3D points can be reconstructed.
In the following, the steps that are required to reconstruct surfaces and points are described briefly. Note that a
well-calibrated camera setup is the main requirement for a precise 3D reconstruction; see Calibration for more
details. Additionally, in the HDevelop example reconstruct_surface_mixed_camera_types.hdev,
a typical calibration workflow (from the calibration data model via the camera setup model to the stereo model) is
performed.

Generate stereo model: First, create the stereo model using

• create_stereo_model.

If you want to reconstruct 3D points, choose the Method ’points_3d’.

3D point reconstruction with ’points_3d’.

For the reconstruction of surfaces, the methods ’surface_pairwise’ and ’surface_fusion’ are avail-
able. For detailed information on these two methods, have a look at the reference manual entry of
reconstruct_surface_stereo.



These three 2D images are used for the surface reconstruction as seen in the images below.




(1) Surface reconstruction with ’surface_pairwise’. (2) Surface reconstruction with ’surface_fusion’. (3) Surface
reconstruction with ’surface_fusion’, where the color information is extracted from the used 2D images. Have
a look at the HDevelop example reconstruct_surface_mixed_camera_types.hdev to see the 3D
reconstruction process.

Set the image pairs (only for surface reconstruction): For the reconstruction of 3D surfaces, multiple binocular
stereo reconstructions are performed, and then combined. For the binocular reconstruction, image pairs
have to be specified. For example, for the three images shown above, the image pairs might be [0,1] and
[1,2]. The image pairs have to be specified using

• set_stereo_model_image_pairs,

and can be queried with

• get_stereo_model_image_pairs.

For more information, see reconstruct_surface_stereo as well as the above-mentioned operators.


Modify the stereo model parameters: With

• set_stereo_model_param,

you can optimize the settings of the 3D reconstruction for your setup.
When reconstructing surfaces, it is highly recommended to limit the 3D reconstruction using a bounding box
which is as tight as possible around the object that is to be reconstructed.

The bounding box, which is set with set_stereo_model_param, restricts the area where the object is
reconstructed, and thus can be used to reduce the runtime greatly.


When using the 'surface_fusion' Method in create_stereo_model, it is recommended to first optimize the
parameters of the 'surface_pairwise' Method, since it is used as a basis. For more details on the
parameters, see the examples reconstruct_surface_stereo_pairwise_workflow.hdev and
reconstruct_surface_stereo_fusion_workflow.hdev.
You can query the set parameters with

• get_stereo_model_param.

Perform the 3D reconstruction: Then, to perform the actual reconstruction, use

• reconstruct_points_stereo or
• reconstruct_surface_stereo.

Get intermediate results (only for surface reconstruction): Note that to query these intermediate results, you
must enable the ’persistence’ mode for the stereo model with set_stereo_model_param before per-
forming the reconstruction.
With

• get_stereo_model_object,

you can access and inspect intermediate results of a surface reconstruction performed with
reconstruct_surface_stereo. These images can be used for troubleshooting the reconstruction
process.
With

• get_stereo_model_object_model_3d,

you can get the 3D object model that was reconstructed with reconstruct_surface_stereo as an
intermediate result using the Method ’surface_fusion’.
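As an illustration, a minimal HDevelop-style sketch of this workflow for a surface reconstruction with three cameras could look as follows; the camera setup handle, the image pairs, and the bounding box values are placeholder assumptions, not values taken from this manual:

* CameraSetupModelID: calibrated three-camera setup; Images: one image per camera.
create_stereo_model (CameraSetupModelID, 'surface_pairwise', [], [], StereoModelID)
set_stereo_model_image_pairs (StereoModelID, [0,1], [1,2])
* Assumed bounding box format: two opposite corners [x1, y1, z1, x2, y2, z2].
set_stereo_model_param (StereoModelID, 'bounding_box', [-0.2,-0.2,0.8,0.2,0.2,1.2])
reconstruct_surface_stereo (Images, StereoModelID, ObjectModel3D)
clear_stereo_model (StereoModelID)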

clear_stereo_model ( : : StereoModelID : )

Free the memory of a stereo model.


The operator clear_stereo_model frees the memory of the stereo model StereoModelID that was created
by create_stereo_model. After calling clear_stereo_model, the model can no longer be used. The
handle StereoModelID becomes invalid.
Parameters
. StereoModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . stereo_model ; handle
Handle of the stereo model.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• StereoModelID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Module
3D Metrology


create_stereo_model ( : : CameraSetupModelID, Method, GenParamName, GenParamValue : StereoModelID )

Create a HALCON stereo model.


The operator create_stereo_model creates a HALCON stereo model and returns a handle to it in
StereoModelID. The model provides functionality for reconstructing either 3D points or surfaces from a cal-
ibrated multi-view stereo camera setup specified in CameraSetupModelID (refer to Calibration / Multi-View
for further details on calibration of multiple cameras).
For Method=’points_3d’, a stereo model is created that, after being configured, can be passed to
reconstruct_points_stereo. The latter reconstructs 3D points by intersecting lines of sight from point
correspondences, extracted from multiple calibrated images (see reconstruct_points_stereo for more
details).
For Method=’surface_pairwise’ or Method=’surface_fusion’, a stereo model is created that, after being config-
ured, can be passed to reconstruct_surface_stereo. The latter obtains disparity images from preselected
image pairs in a calibrated multi-view stereo setup and fuses the collected 3D information in a single surface re-
construction (see reconstruct_surface_stereo for more details).
The parameters GenParamName and GenParamValue can be used to set general model parameters. Alterna-
tively, these parameters can be modified with the operator set_stereo_model_param before the correspond-
ing reconstruction operator is called (see set_stereo_model_param for more details on the available model
parameters).
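A hedged HDevelop-style example: create a stereo model for pairwise surface reconstruction and set a generic parameter directly at creation time; the value 'true' for 'persistence' is an assumption about the expected value format:

create_stereo_model (CameraSetupModelID, 'surface_pairwise', 'persistence', 'true', StereoModelID)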
Parameters
. CameraSetupModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . camera_setup_model ; handle
Handle to the camera setup model.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Reconstruction method.
Default: ’surface_pairwise’
List of values: Method ∈ {’surface_pairwise’, ’surface_fusion’, ’points_3d’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Name of the model parameter to be set.
Default: []
List of values: GenParamName ∈ {’bounding_box’, ’persistence’, ’sub_sampling_step’,
’rectif_interpolation’, ’rectif_method’, ’disparity_method’, ’binocular_method’, ’binocular_num_levels’,
’binocular_mask_width’, ’binocular_mask_height’, ’binocular_texture_thresh’, ’binocular_score_thresh’,
’binocular_filter’, ’binocular_sub_disparity’, ’binocular_mg_gray_constancy’,
’binocular_mg_gradient_constancy’, ’binocular_mg_smoothness’, ’binocular_mg_initial_guess’,
’binocular_mg_default_parameters’, ’binocular_mg_solver’, ’binocular_mg_cycle_type’,
’binocular_mg_pre_relax’, ’binocular_mg_post_relax’, ’binocular_mg_initial_level’,
’binocular_mg_iterations’, ’binocular_mg_pyramid_factor’, ’binocular_ms_surface_smoothing’,
’binocular_ms_edge_smoothing’, ’binocular_ms_consistency_check’, ’binocular_ms_similarity_measure’,
’binocular_ms_sub_disparity’, ’point_meshing’, ’resolution’, ’surface_tolerance’, ’min_thickness’,
’smoothing’, ’color’, ’color_invisible’, ’min_disparity’, ’max_disparity’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; real / integer / string
Value of the model parameter to be set.
Default: []
Suggested values: GenParamValue ∈ {-1, -2, -5, 0, 0.3, 0.5, 0.9, 1, 2, 3, ’census_dense’, ’census_sparse’,
’binocular’, ’ncc’, ’none’, ’sad’, ’ssd’, ’bilinear’, ’viewing_direction’, ’geometric’, ’false’, ’very_accurate’,
’accurate’, ’fast_accurate’, ’fast’, ’v’, ’w’, ’none’, ’gauss_seidel’, ’multigrid’, ’true’, ’poisson’, ’isosurface’,
’interpolation’, ’left_right_check’, ’full_multigrid’, ’binocular_mg’, ’binocular_ms’, ’smallest_distance’,
’mean_by_distance’, ’line_of_sight’, ’mean_by_line_of_sight’, ’median’}
. StereoModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . stereo_model ; handle
Handle of the stereo model.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).


• Processed without parallelization.


This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
set_stereo_model_param, set_stereo_model_image_pairs,
reconstruct_surface_stereo, reconstruct_points_stereo
Module
3D Metrology

get_stereo_model_image_pairs ( : : StereoModelID : From, To )

Return the list of image pairs set in a stereo model.


The operator get_stereo_model_image_pairs returns the list of image pairs for the stereo model
StereoModelID. The camera indices of the from and to cameras in the pairs are returned in the parameters
From and To, respectively (the terms "from" and "to" signal that during reconstruction the disparity "from" one
image "to" the other image of the pair is computed). The indices identify cameras from the camera setup model
assigned to the stereo model (see create_stereo_model).
The list of image pairs can be set with the operator set_stereo_model_image_pairs.
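A small HDevelop-style sketch; the pair indices are placeholders:

set_stereo_model_image_pairs (StereoModelID, [0,1], [1,2])
get_stereo_model_image_pairs (StereoModelID, From, To)
* From is now [0, 1] and To is [1, 2].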
Parameters
. StereoModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . stereo_model ; handle
Handle of the stereo model.
. From (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Camera indices for the from cameras in the image pairs.
. To (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Camera indices for the to cameras in the image pairs.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
set_stereo_model_image_pairs
Possible Successors
reconstruct_surface_stereo
See also
set_stereo_model_image_pairs
Module
3D Metrology

get_stereo_model_object ( : Object : StereoModelID, PairIndex, ObjectName : )

Get intermediate iconic results of a stereo reconstruction.


With the operator get_stereo_model_object you can access and inspect intermediate iconic re-
sults of a surface reconstruction performed with reconstruct_surface_stereo for the stereo model
StereoModelID. In particular, this is useful for troubleshooting the reconstruction process. Note
that to collect the iconic results, you must enable the ’persistence’ mode for the stereo model (see
set_stereo_model_param) before performing the reconstruction. Iconic results are then associated with
each image pair that was processed during the reconstruction (see get_stereo_model_image_pairs).


You select the image pair of interest by specifying the corresponding camera indices [From, To] in
PairIndex. By setting one of the following values in ObjectName, the corresponding iconic objects are
then returned in Object:

’from_image_rect’, ’to_image_rect’: Rectified image corresponding to the from and to camera, respectively. Both
images can be used to inspect the quality of the internal binocular stereo image rectification.
’disparity_image’: Disparity image for this pair. The quality of the disparity image has a direct impact on the final
surface reconstruction.
’score_image’: Score image assigned to the disparity image for this pair.

A mismatch between the rectified images, i.e., features appearing in different rows in the two im-
ages, or errors in the disparity or the score image have direct impact on the quality of the fi-
nal surface reconstruction. Therefore, we recommend correcting any detected imperfections by adjust-
ing the stereo model parameters (see set_stereo_model_param), in particular those which con-
trol the internal usage of gen_binocular_rectification_map and binocular_disparity (see
set_stereo_model_image_pairs and reconstruct_surface_stereo for further details).
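A hedged HDevelop-style sketch of this inspection; the value 'true' for 'persistence' and the pair [0,1] are assumptions for illustration:

set_stereo_model_param (StereoModelID, 'persistence', 'true')
reconstruct_surface_stereo (Images, StereoModelID, ObjectModel3D)
* Inspect the rectified images and the disparity image of the pair [0,1].
get_stereo_model_object (FromImageRect, StereoModelID, [0,1], 'from_image_rect')
get_stereo_model_object (ToImageRect, StereoModelID, [0,1], 'to_image_rect')
get_stereo_model_object (DisparityImage, StereoModelID, [0,1], 'disparity_image')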
Parameters
. Object (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object(-array) ; object
Iconic result.
. StereoModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . stereo_model ; handle
Handle of the stereo model.
. PairIndex (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; integer / string / real
Camera indices of the pair ([From, To]).
Suggested values: PairIndex ∈ {0, 1, 2}
. ObjectName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the iconic result to be returned.
Suggested values: ObjectName ∈ {’from_image_rect’, ’to_image_rect’, ’disparity_image’, ’score_image’}
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
reconstruct_surface_stereo
Module
3D Metrology

get_stereo_model_object_model_3d ( : : StereoModelID,
GenParamName : ObjectModel3D )

Get intermediate 3D object model of a stereo reconstruction.


With the operator get_stereo_model_object_model_3d it is possible to get a 3D object model
ObjectModel3D that was reconstructed with reconstruct_surface_stereo as an intermedi-
ate result using the method ’surface_fusion’. The returned object model is equal to the result of
reconstruct_surface_stereo using method ’surface_pairwise’.
For this, a call to get_stereo_model_object_model_3d has to be performed using the value
’m3d_pairwise’ for the parameter GenParamName. It should be noted that the model can only be queried if
the ’persistence’ mode for the stereo model (see set_stereo_model_param) is enabled before performing
the reconstruction. Furthermore the object model can only be queried if the stereo model has been created using
the method ’surface_fusion’. Otherwise, an error is returned. If no object model has been created, the operator
returns -1.
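A minimal HDevelop-style sketch, assuming 'persistence' was enabled beforehand and the stereo model was created with the method 'surface_fusion':

reconstruct_surface_stereo (Images, StereoModelID, ObjectModel3DFused)
* Query the intermediate result of the pairwise reconstruction step.
get_stereo_model_object_model_3d (StereoModelID, 'm3d_pairwise', ObjectModel3DPairwise)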


Parameters
. StereoModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . stereo_model ; handle
Handle of the stereo model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Names of the model parameters.
List of values: GenParamName ∈ {’m3d_pairwise’}
. ObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the intermediate 3D object model.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
reconstruct_surface_stereo, set_stereo_model_param
See also
set_stereo_model_param
Module
3D Metrology

get_stereo_model_param ( : : StereoModelID,
GenParamName : GenParamValue )

Get stereo model parameters.


The operator get_stereo_model_param can be used to inspect diverse parameters of the stereo model
StereoModelID by specifying their names in GenParamName and getting their values in GenParamValue.
Two types of parameters can be inspected with this operator: general parameters and parameters specific to
surface reconstruction. Note that no specific parameters are provided for 3D point stereo reconstruction.
All parameters that can be set with set_stereo_model_param can also be queried with
get_stereo_model_param; for a description, see the former operator. In contrast, the following
parameters are set by other operators and cannot be modified afterwards.
General parameters

’type’: Type of the stereo model (currently either ’surface_pairwise’, ’surface_fusion’ or ’points_3d’).
’camera_setup_model’: Handle to a copy of the camera setup model set in the stereo model. Changing properties
of the copy does not affect the camera setup model stored in the stereo model.
’from_cam_param_rect N’, ’to_cam_param_rect N’: Camera parameters of the rectified from- and to-cameras of
camera pair N. See set_stereo_model_image_pairs for more information about camera pairs.
’from_cam_pose_rect N’, ’to_cam_pose_rect N’: Point transformation from the rectified from- and to-cameras of
camera pair N to the respective unrectified camera. See set_stereo_model_image_pairs for more
information about camera pairs.
’rel_pose_rect N’: Point transformation from the rectified to-camera to the rectified from-camera. See
set_stereo_model_image_pairs for more information about camera pairs.

The parameters ’type’ and ’camera_setup_model’ are set when creating the stereo model with
create_stereo_model. For ’from_cam_param_rect N’, ’to_cam_param_rect N’, ’from_cam_pose_rect N’,
’to_cam_pose_rect N’, and ’rel_pose_rect N’, note that these parameters are only available after setting the image
pairs (see set_stereo_model_image_pairs).
A note on tuple-valued model parameters


Most of the stereo model parameters are single-valued. Thus, you can provide a list (i.e., tuple) of parameter names
and get a list (tuple) of values of the same length. In contrast, when querying a tuple-valued
parameter, a tuple of values is returned. When querying such a parameter together with other parameters, the value-
to-parameter-name correspondence is not obvious anymore. Thus, tuple-valued parameters like ’bounding_box’,
’min_disparity’ or ’max_disparity’ should always be queried in a separate call to get_stereo_model_param.
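For example, in HDevelop-style notation; the chosen parameter names are only for illustration:

* Single-valued parameters can be queried together in one call.
get_stereo_model_param (StereoModelID, ['type','persistence'], GenParamValue)
* Tuple-valued parameters such as 'bounding_box' should be queried separately.
get_stereo_model_param (StereoModelID, 'bounding_box', BoundingBox)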
Parameters
. StereoModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . stereo_model ; handle
Handle of the stereo model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Names of the parameters to be queried.
List of values: GenParamName ∈ {’type’, ’camera_setup_model’, ’bounding_box’, ’persistence’,
’sub_sampling_step’, ’rectif_interpolation’, ’rectif_sub_sampling’, ’rectif_method’, ’disparity_method’,
’binocular_method’, ’binocular_num_levels’, ’binocular_mask_width’, ’binocular_mask_height’,
’binocular_texture_thresh’, ’binocular_score_thresh’, ’binocular_filter’, ’binocular_sub_disparity’,
’binocular_mg_gray_constancy’, ’binocular_mg_gradient_constancy’, ’binocular_mg_smoothness’,
’binocular_mg_initial_guess’, ’binocular_mg_solver’, ’binocular_mg_cycle_type’,
’binocular_mg_pre_relax’, ’binocular_mg_post_relax’, ’binocular_mg_initial_level’,
’binocular_mg_iterations’, ’binocular_mg_pyramid_factor’, ’binocular_ms_surface_smoothing’,
’binocular_ms_edge_smoothing’, ’binocular_ms_consistency_check’, ’binocular_ms_similarity_measure’,
’binocular_ms_sub_disparity’, ’min_disparity’, ’max_disparity’, ’point_meshing’, ’poisson_depth’,
’poisson_solver_divide’, ’poisson_samples_per_node’, ’resolution’, ’surface_tolerance’, ’min_thickness’,
’smoothing’, ’color’, ’color_invisible’, ’from_cam_param_rect’, ’to_cam_param_rect’,
’from_cam_pose_rect’, ’to_cam_pose_rect’, ’rel_pose_rect’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value-array ; real / integer / string
Values of the queried parameters.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
create_stereo_model, set_stereo_model_param
Possible Successors
reconstruct_surface_stereo, reconstruct_points_stereo
See also
set_stereo_model_param
Module
3D Metrology

reconstruct_points_stereo ( : : StereoModelID, Row, Column, CovIP, CameraIdx, PointIdx : X, Y, Z, CovWP, PointIdxOut )

Reconstruct 3D points from calibrated multi-view stereo images.


The operator reconstruct_points_stereo reconstructs 3D points from point correspondences found in
the images of a calibrated multi-view stereo setup. The calibration information for the images is provided in
the camera setup model that is associated with the stereo model StereoModelID during its creation (see
create_stereo_model). Note that the stereo model type must be ’points_3d’, otherwise the operator will
return an error.
The point correspondences must be passed in the parameters Row, Column, CameraIdx, and PointIdx in
form of tuples of the same length. Each set (Row[I],Column[I],CameraIdx[I],PointIdx[I]) rep-
resents the image coordinates (Row, Column) of the 3D point (PointIdx) in the image of a certain camera
(CameraIdx).


The reconstructed 3D point coordinates are returned in the tuples X, Y, and Z, relative to the coordinate system
of the camera setup model (see create_camera_setup_model). The tuple PointIdxOut contains the
corresponding point indices.
The reconstruction algorithm works as follows: First, it identifies point correspondences for a given 3D point
by collecting all sets with the same PointIdx. Then, it uses the Row, Column, and CameraIdx informa-
tion from the collected sets to project lines of sight from each camera through the corresponding image point
[Row,Column]. If there are at least 2 lines of sight for the point PointIdx, they are intersected and the result is
stored as the set (X[J],Y[J],Z[J],PointIdxOut[J]). The intersection is performed with a least-squares
algorithm, without taking into account potentially invalid lines of sight (e.g., if an image point was falsely specified
as corresponding to a certain 3D point).
To compute the covariance matrices for the reconstructed 3D points, statistical information about the extracted
image coordinates, i.e., the covariance matrices of the image points (see, e.g., points_foerstner), are needed
as input and must be passed in the parameter CovIP. Otherwise, if no covariance matrices for the 3D points are
needed or no covariance matrices for the image points are available, an empty tuple can be passed in CovIP. Then
no covariance matrix for the reconstructed 3D points is computed.
The covariance matrix of an image point is:

    CovIP = [ (sigma_r)^2   sigma_rc    ]
            [  sigma_rc    (sigma_c)^2  ]

The covariance matrices are symmetric 2x2 matrices, whose entries in the main diagonal represent the variances
of the image point in row-direction and column-direction, respectively. For each image point, a covariance matrix
must be passed in CovIP in form of a tuple with 4 elements:

[(sigma_r)^2, sigma_rc, sigma_rc, (sigma_c)^2].

Thus, |CovIP|=4*|Row| and CovIP[I*4:I*4+3] is the covariance matrix for the I-th image point.
The computed covariance matrix for a successfully reconstructed 3D point is represented by a symmetric 3x3
matrix:

    CovWP = [ (sigma_x)^2   sigma_xy      sigma_xz    ]
            [  sigma_yx    (sigma_y)^2    sigma_yz    ]
            [  sigma_zx     sigma_zy     (sigma_z)^2  ]

The diagonal entries represent the variances of the reconstructed 3D point in x-, y-, and z-direction. The computed
matrices are returned in the parameter CovWP in form of tuples each with 9 elements:

[(sigma_x)^2, sigma_xy, sigma_xz, sigma_yx, (sigma_y)^2, sigma_yz, sigma_zx, sigma_zy, (sigma_z)^2].

Thus, |CovWP|=9*|X| and CovWP[J*9:J*9+8] is the covariance matrix for the J-th 3D point. Note that
if the camera setup associated with the stereo model contains the covariance matrices for the camera parameters,
these covariance matrices are considered in the computation of CovWP too.
If the stereo model has a valid bounding box set (see set_stereo_model_param), the resulting points are
clipped to this bounding box, i.e., points outside it are not returned. If the bounding box associated with the stereo
model is invalid, it is ignored and all points that could be reconstructed are returned.
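The following hedged HDevelop-style sketch shows how the correspondence tuples are laid out; the image coordinates are made-up placeholder values and no image point covariances are passed:

* Two 3D points (indices 0 and 1), each observed by the cameras 0 and 1.
Row := [100.5, 210.2, 102.3, 208.7]
Column := [400.1, 380.4, 398.9, 382.5]
CameraIdx := [0, 0, 1, 1]
PointIdx := [0, 1, 0, 1]
reconstruct_points_stereo (StereoModelID, Row, Column, [], CameraIdx, PointIdx, X, Y, Z, CovWP, PointIdxOut)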
Parameters
. StereoModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . stereo_model ; handle
Handle of the stereo model.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Row coordinates of the detected points.
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Column coordinates of the detected points.
. CovIP (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Covariance matrices of the detected points.
Default: []


. CameraIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer


Indices of the observing cameras.
Suggested values: CameraIdx ∈ {0, 1, 2}
. PointIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Indices of the observed world points.
Suggested values: PointIdx ∈ {0, 1, 2}
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
X coordinates of the reconstructed 3D points.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
Y coordinates of the reconstructed 3D points.
. Z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
Z coordinates of the reconstructed 3D points.
. CovWP (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
Covariance matrices of the reconstructed 3D points.
. PointIdxOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; integer
Indices of the reconstructed 3D points.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Alternatives
reconstruct_surface_stereo, intersect_lines_of_sight
Module
3D Metrology

reconstruct_surface_stereo (
Images : : StereoModelID : ObjectModel3D )

Reconstruct surface from calibrated multi-view stereo images.


The operator reconstruct_surface_stereo reconstructs a surface from multiple Images, acquired with
a calibrated multi-view setup associated with a stereo model StereoModelID. The reconstructed surface is
stored in the handle ObjectModel3D.
Preparation and requirements
A summary of the preparation of a stereo model for surface reconstruction:

1. Obtain a calibrated camera setup model (use calibrate_cameras or create_camera_setup_model) and configure it.
2. Create a stereo model with create_stereo_model by selecting Method=’surface_pairwise’ or ’sur-
face_fusion’ (see ’Reconstruction algorithm’).
3. Configure the rectification parameters with set_stereo_model_param and afterwards set the image
pairs with set_stereo_model_image_pairs.
4. Configure the bounding box for the system with set_stereo_model_param
(GenParamName=’bounding_box’).
5. Configure parameters of pairwise reconstruction with set_stereo_model_param.
6. For models with Method=’surface_fusion’ configure parameters of the fusion algorithm with
set_stereo_model_param.
7. Acquire images with the calibrated cameras setup and collect them in an image array Images.
8. Perform surface reconstruction with reconstruct_surface_stereo.
9. Query and analyze intermediate results with get_stereo_model_object and
get_stereo_model_object_model_3d.


10. Readjust the parameters of the stereo model to improve the results with respect to quality and runtime with
set_stereo_model_param.

A camera setup model is associated with the stereo model StereoModelID upon its creation with
create_stereo_model. The camera setup must contain calibrated information about the cameras, with which
the images in the image array Images were acquired: the I-th image from the array corresponds to the camera
with index I-1 from the camera setup; the number of images in the array must be the same as the number of
cameras in the camera setup. The Images must represent a static scene or must be taken simultaneously;
otherwise, the reconstruction of the surface might be impossible.
A well-calibrated camera setup is the main requirement for a precise surface reconstruction. Therefore, special
attention should be paid to obtaining a precise calibration of the cameras in the multi-view stereo setup used.
HALCON provides calibration of a multi-view setup with the operator calibrate_cameras. The resulting
calibrated camera setup can be accessed with a successive call to get_calib_data. Alternatively, for camera
setups with known parameters a calibrated camera setup can be created with create_camera_setup_model.
The proper selection of image pairs (see set_stereo_model_image_pairs) plays an important role in the
overall quality of the surface reconstruction. On the one hand, camera pairs with a small base line (small distance
between the camera centers) are better suited for the binocular stereo disparity algorithms. On the other hand,
in order to derive more accurate depth information of the scene, pairs with a long base line should be preferred.
Camera pairs should provide different points of view, such that if one pair does not see a certain area of the
surface, it is covered by another pair. Please note that the number of pairs linearly affects the runtime of the
pairwise reconstruction. Therefore, use "as many as needed and just as few as possible" image pairs in order to
handle the trade-off between completeness of the surface reconstruction and reconstruction runtime.
A bounding box is associated with the stereo model StereoModelID. For the surface stereo reconstruction,
it is required that the bounding box is valid (see set_stereo_model_param for further details). The recon-
struction algorithm needs the bounding box for three reasons:

• First, if MinDisparity and MaxDisparity were not set manually using the operators
create_stereo_model or set_stereo_model_param, it uses the projection of the bound-
ing box into both images of each image pair in order to estimate the values for MinDisparity
and MaxDisparity, which in turn are used in the internal call to binocular_disparity and
binocular_disparity_ms. In the case of using binocular_disparity_mg as disparity method,
suitable values for the parameters InitialGuess and ’initial_level’ are derived from the above-mentioned
parameters. However, the automatic estimation for this method is only used if called with default values for
the two parameters. Otherwise, the values as set by the user with set_stereo_model_param are used.
• Secondly, the default parameters for the fusion of pairwise reconstructions are calculated based on the bound-
ing box. They are reset in case the bounding box is changed. The bounding box should be tight around the
volume of interest. Else, the runtime will increase unnecessarily and drastically.
• Thirdly, the surface fragments lying outside the bounding box are clipped and are not re-
turned in ObjectModel3D. A too large bounding box results in a large difference be-
tween MinDisparity and MaxDisparity and this usually slows down the execution of
binocular_disparity, binocular_disparity_ms or binocular_disparity_mg and
therefore reconstruct_surface_stereo. A too small bounding box might result in clipping valid
surface areas.

Note that the method ’surface_fusion’ will try to produce a closed surface. If the object is only observed and
reconstructed from one side, the far end of the bounding box usually determines where the object is cut off.
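If the automatic estimate derived from the bounding box is not suitable, the disparity range can also be set manually. A hedged HDevelop-style sketch, assuming one value per image pair and purely illustrative disparity values:

set_stereo_model_param (StereoModelID, 'min_disparity', [-40, -35])
set_stereo_model_param (StereoModelID, 'max_disparity', [40, 45])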
Setting parameters of pairwise reconstruction before setting parameters of fusion is essential since the pair-
wise reconstruction of the object is input for the fusion algorithm. For a description of parameters, see
set_stereo_model_param. The choice of ’disparity_method’ has a major influence. The objects in the
scene should expose certain surface properties in order to make the scene suitable for the dense surface reconstruc-
tion. First, the surface reflectance should exhibit Lambertian properties as closely as possible (i.e., light falling on
the surface is scattered such that its apparent brightness is the same regardless of the angle of view). Secondly, the
surface should exhibit enough texture, but no repeating patterns.
get_stereo_model_object can be used to view intermediate results, in particular rectified, disparity and
score images. get_stereo_model_object_model_3d can be used to view the result of pairwise recon-
struction for models with Method=’surface_fusion’. See the paragraph "Troubleshooting for the configuration of
a stereo model" on how to use the obtained results.


Reconstruction algorithm
The operator reconstruct_surface_stereo performs multiple binocular stereo reconstructions and sub-
sequently combines the results. The image pairs of this pairwise reconstruction are specified in StereoModelID
as pairs of cameras of an associated calibrated multi-view setup.
For each image pair, the images are rectified before internally one of the operators binocular_disparity,
binocular_disparity_mg or binocular_disparity_ms is called. The disparity informa-
tion is then converted to points in the coordinate system of the from-camera by an internal call of
disparity_image_to_xyz. In the next step, the points are transformed into the common coordinate sys-
tem that is specified in the camera setup model associated with StereoModelID and stored in a common point
cloud together with the points extracted from other image pairs.

'surface_pairwise' If the stereo model is of type 'surface_pairwise' (compare create_stereo_model),
the point cloud obtained as described above is directly returned in ObjectModel3D. For each point,
the normal vector is calculated by fitting a plane through the neighboring 3D points. In contrast to
surface_normals_object_model_3d, the neighboring points are not determined in 3D but sim-
ply in 2D by using the neighboring points in the X, Y, and Z images. The normal vector of each 3D point
is then set to the normal vector of the respective plane. Additionally, the score of the calculated disparity
is attached to every reconstructed 3D point and stored as an extended attribute. Furthermore, transformed
coordinate images can be sub-sampled. If only one image pair is processed and no point meshing is enabled,
reconstruct_surface_stereo stores a ’xyz_mapping’ attribute in ObjectModel3D, which
reveals the mapping of the reconstructed 3D points to coordinates of the first image of the pair. This at-
tribute is required by operators like segment_object_model_3d or object_model_3d_to_xyz
(with Type=’from_xyz_map’). In contrast to the single pair case, if two or more image pairs are
processed, reconstruct_surface_stereo does not store the ’xyz_mapping’ attribute since
single reconstructed points would originate from different image pairs. The presence of the at-
tribute in the output object model can be verified by calling get_object_model_3d_params with
GenParamName=’has_xyz_mapping’.
The so-obtained point cloud can be additionally meshed in a post-processing step. The object model returned
in ObjectModel3D then contains the description of the mesh. The used meshing algorithm depends on the
type of the stereo model. For a stereo model of type ’surface_pairwise’, only a Poisson solver is supported
which can be activated by setting the parameter ’point_meshing’ to ’poisson’. It creates a water-tight mesh,
therefore surface regions with missing data are covered by an interpolated mesh.
’surface_fusion’ If the stereo model is of type ’surface_fusion’, the point cloud obtained as described above is
processed further. The goal is to obtain a preferably smooth surface while keeping form fidelity. To this
end, the bounding box is sampled and each sample point is assigned a distance to a so-called isosurface
(consisting of points with distance 0). The final distance values (and thus the isosurface) are obtained by
minimizing an error function based on the points resulting from pairwise reconstruction. This leads to a
fusion of the reconstructed point clouds of all camera pairs (see the second paper in References below).
The calculation of the isosurface can be influenced by set_stereo_model_param with the parameters
’resolution’, ’surface_tolerance’, ’min_thickness’ and ’smoothing’. The distance between sample points in
the bounding box (in each coordinate direction) can be set by the parameter ’resolution’. The parameter
’smoothing’ regulates the ’jumpiness’ of the distance function by weighting the two terms in the error func-
tion: Fidelity to the initial point clouds obtained by pairwise reconstruction on the one hand, total variation
of the distance function on the other hand. Note that the value of 'smoothing' that yields visually pleasing results
for a given data set has to be found by trial and error. Too small values lead to many outliers being integrated into
the surface, and the object surface then exhibits many jumps. Too large values lead to loss of fidelity towards
the point cloud of pairwise reconstruction. Fidelity to the initial surfaces obtained by pairwise reconstruction
is not maintained in the entire bounding box, but only in cones of sight of cameras to the initial surface. A
sample point in such a cone is considered surely outside of the object (in front of the surface) or surely inside
the object (behind the surface) with respect to the given camera if its distance to the initial surface exceeds a
given value which can be set by the parameter ’surface_tolerance’. The length of considered cones behind
the initial surface can roughly be set by the parameter ’min_thickness’ (see set_stereo_model_param
for more details). ’min_thickness’ always has to be larger than or equal to ’surface_tolerance’.


The parameters ’surface_tolerance’ and ’min_thickness’ regulate the fidelity to the initial surface obtained by
pairwise reconstruction. Points in a cone of sight of a camera are considered surely outside of the object (in
front of the surface) or surely inside the object (behind the surface) with respect to the given camera if their
distance to the initial surface exceeds ’surface_tolerance’. Points behind the surface (viewed from the given
camera) are only considered to lie inside the object if their distance to the initial surface does not exceed
’min_thickness’.

Each 3D point of the object model returned in ObjectModel3D is extracted from the isosurface where the
distance function equals zero. Its normal vector is calculated from the gradient of the distance function. While
the method ’surface_fusion’ requires the setting of more parameters than simple pairwise reconstruction,
post-processing of the obtained point cloud representing the object surface will probably get a lot simpler.
In particular, suppression of outliers, smoothing, equidistant sub-sampling and hole filling can be handled
nicely and often in high quality by this method. The same can be said about the possible internal meshing of
the output surface, see the next paragraph. Note that the algorithm will try to produce a closed surface. If the
object is only observed and reconstructed from one side, the far end of the bounding box usually determines
where the object is cut off. The method ’surface_fusion’ may take considerably longer than simple pairwise
reconstruction, depending mainly on the parameter ’resolution’.
Additionally, the so-obtained point cloud can be meshed in a post-processing step. The object model returned
in ObjectModel3D then contains the description of the mesh. For a stereo model of type ’surface_fusion’,
the algorithm ’marching tetrahedra’ is used which can be activated by setting the parameter ’point_meshing’
to ’isosurface’. The wanted meshed surface is extracted as the isosurface where the distance function equals
zero. Note that there are more points in ObjectModel3D if meshing of the isosurface is enabled even if
the used ’resolution’ is the same.

Coloring the 3D object model


It is possible to provide color information for 3D object models that have been reconstructed with
reconstruct_surface_stereo from the input images. The computation of the color depends on the cho-
sen method set with set_stereo_model_param (see explanation in the list there). Each 3D point is assigned
a color value consisting of a red, green and blue channel which are stored as attributes named ’red’, ’green’ and
’blue’ in the output 3D object model ObjectModel3D. These attributes can for example be used in the procedure
visualize_object_model_3d with GenParamName = ’red_channel_attrib’, ’green_channel_attrib’ and
’blue_channel_attrib’. They can also be queried with get_object_model_3d_params or be processed with
select_points_object_model_3d or other operators that use extended attributes. If the reconstruction
has been performed using gray value images, the color value for the three channels is identical. If multi-channel
images are used, the reconstruction is performed using the first channel only. The remaining channels are solely
used for the calculation of the color values.
If stereo models of type ’surface_fusion’ are used, the reconstruction will contain points without a direct corre-
spondence to points in the images. These points are not seen by any of the cameras of the stereo system and
are therefore "invisible". A color value for these points is derived by assigning the value of the nearest visible
neighbor. Normally, the nearest neighbor search is not very time-consuming and can remain active. However, it
may happen that the value for the parameter ’resolution’ is considerably finer than the available image resolution.
In this case, many invisible 3D points are reconstructed making the nearest neighbor search very time consum-
ing. In order to avoid an increased runtime, it is recommended to either adapt the value of ’resolution’ or to
switch off the calculation for invisible points. This can be done by calling set_stereo_model_param with
GenParamName=’color_invisible’ and GenParamValue= ’false’. In this case, invisible points are assigned
255 as gray value.
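A hedged HDevelop-style sketch; the value 'median' for 'color' is an assumed choice among the documented color computation methods, while 'false' for 'color_invisible' is taken from the description above:

* Select a color computation method (assumed value 'median').
set_stereo_model_param (StereoModelID, 'color', 'median')
* Skip the nearest-neighbor color search for invisible points.
set_stereo_model_param (StereoModelID, 'color_invisible', 'false')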
Troubleshooting for the configuration of a stereo model


The proper configuration of a stereo model is not always easy. Please follow the workflow above. If the recon-
struction results are not satisfactory, please consult the following hints and ideas:

Run in persistence mode If you enable the 'persistence' mode of the stereo model (call
set_stereo_model_param with GenParamName='persistence'), a successive call to
reconstruct_surface_stereo will store intermediate iconic results, which provide addi-
tional information. They can be accessed by get_stereo_model_object_model_3d and
get_stereo_model_object.
Check the quality of the calibration
• If the camera setup was obtained by calibrate_cameras, it stores some quality information about
the camera calibration in form of standard deviations of the camera internal parameters. This informa-
tion is then carried in the camera setup model associated with the stereo model. It can be queried by
first calling get_stereo_model_param with GenParamName=’camera_setup_model’ and then
inspecting the camera parameter standard deviations by calling get_camera_setup_param with
GenParamName=’params_deviations’. Unusually big standard deviation values might indicate a bad
camera calibration.
• After setting the stereo model ’persistence’ mode, we recommend inspecting the rectified images for
each image pair. The rectified images are returned by get_stereo_model_object with a camera
index pair [From, To] specifying the pair of interest in the parameter PairIndex and the val-
ues ’from_image_rect’ and ’to_image_rect’ in ObjectName, respectively. If the images are properly
rectified, all corresponding image features must appear in the same row in both rectified images. A
discrepancy of several rows is a serious indication for a bad camera calibration.
Inspect the used bounding box Make sure that the bounding box is tight around the volume of interest. If the
parameters ’min_disparity’ and ’max_disparity’ are not set manually by using create_stereo_model
or set_stereo_model_param, the algorithm uses the projection of the bounding box into both im-
ages of each image pair in order to estimate the values for MinDisparity and MaxDisparity, which
in turn are used in the internal call to binocular_disparity and binocular_disparity_ms.
These values can be queried using get_stereo_model_param and if needed, can be adapted using
set_stereo_model_param. If the disparity values are set manually, the bounding box is only used
to restrict the reconstructed 3D points. In the case of using binocular_disparity_mg as disparity
method, suitable values for the parameters InitialGuess and ’initial_level’ are derived from the bound-
ing box. However, these values can also be reset using set_stereo_model_param. Use the procedure
gen_bounding_box_object_model_3d to create a 3D object model of the bounding box of your stereo
model, and inspect it in conjunction with the reconstructed 3D object model to verify the bounding box visually.
Improve the quality of the disparity images After setting the stereo model ’persistence’ mode (see above),
inspect the disparity and the score images for each image pair. They are returned by
get_stereo_model_object with a camera index pair [From, To] specifying the pair of inter-
est in the parameter PairIndex and the values ’disparity_image’ and ’score_image’ in ObjectName,
respectively. If both images exhibit significant imperfection (e.g., the disparity image does not re-
ally resemble the shape of the object seen in the image), try to adjust the parameters used for the
internal call to binocular_disparity (the parameters with a ’binocular_’ prefix) by modifying
set_stereo_model_param until some improvement is achieved.
Alternatively, a different method to calculate the disparities can be used. Besides the above-
mentioned internal call of binocular_disparity, HALCON also provides the two other methods
binocular_disparity_mg and binocular_disparity_ms. These methods feature e.g., the cal-
culation of disparities in textureless regions at an expanse of the reconstruction time if compared with cross-
correlation methods. However, for these methods, it can be necessary to adapt the parameters to the un-
derlying dataset as well. Dependent on the chosen method, the user can either set the parameters with a
’binocular_mg_’ or a ’binocular_ms_’ prefix until some improvement is achieved.
A detailed description of the provided methods and their parameters can be found in
binocular_disparity, binocular_disparity_mg or binocular_disparity_ms, re-
spectively.
Fusion parameters If the result of pairwise reconstruction as inspected by
get_stereo_model_object_model_3d cannot be improved any further, begin to adapt the fu-
sion parameters. For a description of the parameters, see also set_stereo_model_param. Note that
the pairwise reconstruction is sometimes hardly recognizable even though the fusion algorithm can still turn it into
something sensible. In any case, pairwise reconstruction should yield enough points as input for the fusion
algorithm.


Runtime
In order to improve the runtime, consider the following hints:

Extent of the bounding box The bounding box should be tight around the volume of interest. Else, the runtime
will increase unnecessarily and - for the method ’surface_fusion’ - drastically.
Reduce the domain of the input images Reducing the domain of the input images (e.g., with reduce_domain)
to the relevant part of the image may heavily speed up the algorithm, especially for large images.
Sub-sampling in the rectification step The stereo model parameter ’rectif_sub_sampling’ (see
set_stereo_model_param) controls the sub-sampling in the rectification step. Setting this fac-
tor to a value > 1.0 will reduce the resolution of the rectified images compared to the original images. This
factor has a direct impact on the succeeding performance of the chosen disparity method, but it causes
loss of image detail. The parameter ’rectif_interpolation’ could have also some impact, but typically not a
significant one.
Disparity parameters There is a trade-off between completeness of the pairwise surface reconstruction on the
one hand and reconstruction runtime on the other. The stereo model offers three different methods to
calculate the disparity images. Dependent on the chosen method, the stereo model provides a particu-
lar set of parameters that enables a precise adaption of the method to the used dataset. If the method
binocular_disparity is selected, only parameters with a ’binocular_’ prefix can be set. For the
method binocular_disparity_mg, all settable parameters have to exhibit the prefix ’binocular_mg_’,
whereas for the method binocular_disparity_ms only parameters with ’binocular_ms_’ are applica-
ble.
Parameters using the method binocular_disparity:

• NumLevels
• MaskWidth
• MaskHeight
• Filter
• SubDisparity
Each of these parameters of binocular_disparity has a corresponding stereo model parameter
written in snake case and with the prefix 'binocular_'. Each of them affects the performance to a different
degree; adapting them properly can improve the performance (a short sketch follows this list).
Parameters using the method binocular_disparity_mg:

• GrayConstancy
• GradientConstancy
• Smoothness
• InitialGuess
• ’mg_solver’
• ’mg_cycle_type’
• ’mg_pre_relax’
• ’mg_post_relax’
• ’initial_level’
• ’iterations’
• ’pyramid_factor’
Each of these parameters of binocular_disparity_mg has a corresponding stereo model parame-
ter written in snake case and with the prefix 'binocular_mg_'. Each of them affects the performance and
the result to a different degree; adapting them properly can improve the performance.
Parameters using the method binocular_disparity_ms:

• SurfaceSmoothing
• EdgeSmoothing
• ’consistency_check’
• ’similarity_measure’


• ’sub_disparity’
Each of these parameters of binocular_disparity_ms has a corresponding stereo model parame-
ter written in snake case and with the prefix 'binocular_ms_'. Each of them affects the performance and
the result to a different degree; adapting them properly can improve the performance.
Reconstruct only points with high disparity score Besides adapting the sub-sampling, it is also possible to ex-
clude points from the 3D reconstruction based on their computed disparity score. To do so, first query the
score images for the disparity values by calling get_stereo_model_object using GenParamName =
’score_image’. Depending on the distribution of these values, you can decide whether disparities with a
score beneath a certain threshold should be excluded from the reconstruction. This can be achieved with
set_stereo_model_param using GenParamName = ’binocular_score_thresh’ (see the sketch after
this list). Excluding points from the reconstruction yields a slight speed-up, since it is not necessary to
process the entire dataset. As an alternative, it is also possible to exclude points after executing
reconstruct_surface_stereo by filtering the reconstructed 3D points. At the expense of a slightly
increased runtime, this avoids a second call to reconstruct_surface_stereo.
Sub-sampling of X,Y,Z data For the method ’surface_pairwise’, you can use a larger sub-sampling
step for the X,Y,Z data in the last step of the reconstruction algorithm by modifying
GenParamName=’sub_sampling_step’ with set_stereo_model_param. The reconstructed data
will be much sparser, thus speeding up the post-processing.
Fusion parameters For the method ’surface_fusion’, enlarging the parameter ’resolution’ will speed up the exe-
cution considerably.
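The following HDevelop sketch combines some of these hints. The parameter values are purely illustrative and
assume a stereo model of type ’surface_pairwise’ whose image pairs have already been set (see
set_stereo_model_image_pairs); From and To denote the previously used camera index tuples.

* Coarser rectification maps; set_stereo_model_image_pairs must be called again afterwards.
set_stereo_model_param (StereoModelID, 'rectif_sub_sampling', 2.0)
set_stereo_model_image_pairs (StereoModelID, From, To)
* Discard matches with a low score (disparity method 'binocular'; threshold value is illustrative).
set_stereo_model_param (StereoModelID, 'binocular_score_thresh', 0.7)
* Sparser X,Y,Z data in the last reconstruction step ('surface_pairwise' only).
set_stereo_model_param (StereoModelID, 'sub_sampling_step', 3)
reconstruct_surface_stereo (Images, StereoModelID, ObjectModel3D)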

Parameters
. Images (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage-array ; object : byte
An image array acquired by the camera setup associated with the stereo model.
. StereoModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . stereo_model ; handle
Handle of the stereo model.
. ObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle to the resulting surface.
Execution Information

• Supports OpenCL compute devices.


• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
create_stereo_model, get_calib_data, set_stereo_model_image_pairs
Possible Successors
get_stereo_model_object_model_3d
Alternatives
reconstruct_points_stereo
References
M. Kazhdan, M. Bolitho, and H. Hoppe: “Poisson Surface Reconstruction.” Symposium on Geometry Processing
(June 2006),
C. Zach, T. Pock, and H. Bischof: “A globally optimal algorithm for robust TV-L1 range image integration.”
Proceedings of IEEE International Conference on Computer Vision (ICCV 2007).
Module
3D Metrology


set_stereo_model_image_pairs ( : : StereoModelID, From, To : )

Specify image pairs to be used for surface stereo reconstruction.


The operator set_stereo_model_image_pairs stores a list of image pairs for a stereo model
StereoModelID of type ’surface_pairwise’ or ’surface_fusion’. Calling the operator for a model of another
type will raise an error. In the mode ’surface_pairwise’ or ’surface_fusion’, surfaces are reconstructed by com-
puting disparity images for image pairs. You specify these image pairs by passing tuples of camera indices in the
parameters From and To. Then, e.g., the disparity image from the camera with index From[0] to the camera
with index To[0] is computed and so on.
The camera indices must be valid for the camera setup model assigned to the stereo model (see
create_stereo_model), otherwise an error is returned. If a list of image pairs already exists in the stereo
model, it is replaced by the new one.
Besides storing the list of image pairs, the operator set_stereo_model_image_pairs pre-
pares a pair of rectification image maps for each image pair, which are used repeatedly in suc-
cessive calls to reconstruct_surface_stereo to rectify the images to a normalized binocu-
lar stereo pair; refer to gen_binocular_rectification_map for further details. Three of the
gen_binocular_rectification_map parameters are exported as stereo model parameters and can be
modified by set_stereo_model_param or just inspected by get_stereo_model_param:

’rectif_interpolation’: Interpolation mode corresponding to the parameter MapType of
gen_binocular_rectification_map.
’rectif_sub_sampling’: Sub-sampling factor corresponding to the parameter SubSampling of
gen_binocular_rectification_map.
’rectif_method’: Rectification method corresponding to the parameter Method of
gen_binocular_rectification_map.

Note that after modifying these parameters, set_stereo_model_image_pairs must be executed again for
the changes to take effect.
The current list of image pairs in the model can be inspected by get_stereo_model_image_pairs.
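A minimal HDevelop sketch (the camera indices are illustrative and assume a calibrated setup with three cameras,
indices 0 to 2, that is already attached to the stereo model):

* Compute disparities from camera 0 to camera 1 and from camera 1 to camera 2.
set_stereo_model_image_pairs (StereoModelID, [0,1], [1,2])
* After changing a 'rectif_...' parameter, the image pairs must be set again.
set_stereo_model_param (StereoModelID, 'rectif_method', 'geometric')
set_stereo_model_image_pairs (StereoModelID, [0,1], [1,2])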
Parameters
. StereoModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . stereo_model ; handle
Handle of the stereo model.
. From (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Camera indices for the from cameras in the image pairs.
Number of elements: From > 0
. To (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Camera indices for the to cameras in the image pairs.
Number of elements: To == From
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

This operator modifies the state of the following input parameter:


• StereoModelID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_stereo_model
Possible Successors
reconstruct_surface_stereo


See also
set_stereo_model_param, get_stereo_model_image_pairs
Module
3D Metrology

set_stereo_model_param ( : : StereoModelID, GenParamName, GenParamValue : )

Set stereo model parameters.


The operator set_stereo_model_param can be used to set diverse parameters for the stereo model
StereoModelID. Several types of parameters can be set with this operator, depending on the type of the stereo
model that was specified in create_stereo_model. Note that no specific parameters are provided for the type
’points_3d’.
General parameters:
By setting GenParamName to one of the following values, general stereo model parameters can be set to the
value passed in GenParamValue:

’bounding_box’: A tuple [x1,y1,z1,x2,y2,z2] specifying two opposite corner points P1=[x1,y1,z1]
and P2=[x2,y2,z2] of a bounding box for the reconstructions. The bounding box defines a box in
space in the coordinate frame of the camera setup model used for the reconstruction (specified by
CameraSetupModelID in create_stereo_model). The reconstruction algorithms then clip any
resulting reconstruction to this bounding box.
Furthermore, if the parameters ’min_disparity’ and ’max_disparity’ are not set manu-
ally by using create_stereo_model or set_stereo_model_param, the opera-
tor reconstruct_surface_stereo requires a valid bounding box for the estimation
of the minimal and maximal disparity parameters for the pairwise disparity estimation (see
set_stereo_model_image_pairs for more details).
Note that the values of parameters for the fusion of surfaces are reset to default values each time the bounding
box is reset.
You can use the procedure estimate_bounding_box_3d_reconstruction to get initial values for
the bounding box of your 3D reconstruction. This bounding box is based on the pose of a reference calibration
plate and the cones of sight of the cameras. Later, the bounding box should be set as tight as possible around
the object that is to be reconstructed.
Additionally, the procedures gen_bounding_box_object_model_3d and
gen_camera_setup_object_model_3d can be used to visualize your camera setup.
For a valid bounding box, P1 must be the point on the front lower left corner and P2 on the back upper
right corner of the bounding box, i.e., x1<x2, y1<y2 and z1<z2. While the surface reconstruction (see
reconstruct_surface_stereo) will terminate in the case of an invalid bounding box, the 3D point
reconstruction algorithm (see reconstruct_points_stereo) simply ignores it and reconstructs all
points it can, without clipping them. Thus, you can turn off the result clipping for the 3D point reconstruction
by passing the tuple [0,0,0,0,0,-1] (see the sketch after this parameter list).
Note that because ’bounding_box’ is a tuple-valued parameter, it cannot be set in a single call of
set_stereo_model_param together with other model parameters (see the paragraph "A note on tuple-
valued model parameters" below).
Tuple format: [x1,y1,z1,x2,y2,z2]
’persistence’: Enables (GenParamValue=1) or disables (GenParamValue=0) the ’persistence’ mode of the
stereo model. When in persistence mode, the model stores intermediate results of the reconstruction (only for
reconstruct_surface_stereo), which can be inspected later by get_stereo_model_object
and get_stereo_model_object_model_3d.
Note that the model might need significant memory space in this mode. This can worsen the performance
of the reconstruction algorithms and even lead to running out of memory, in particular for setups with many
cameras and/or large images. Therefore, we recommend enabling this mode only for inspecting and
debugging a reconstruction with small data sets.
List of values: 0 , 1.
Default: 0.
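A minimal HDevelop sketch for the general parameters (the coordinate values are purely illustrative):

* Tight bounding box around the volume of interest (tuple-valued, therefore a separate call).
set_stereo_model_param (StereoModelID, 'bounding_box', [-0.1,-0.1,0.3,0.1,0.1,0.45])
* Alternatively, for reconstruct_points_stereo, result clipping can be switched off entirely.
set_stereo_model_param (StereoModelID, 'bounding_box', [0,0,0,0,0,-1])
* Keep intermediate results for inspection (recommended for debugging only).
set_stereo_model_param (StereoModelID, 'persistence', 1)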


Parameters for the surface reconstruction using ’surface_pairwise’ or ’surface_fusion’:


By setting GenParamName to one of the following values, additional parameters specific for surface reconstruc-
tion can be set with GenParamValue for a stereo model of type ’surface_pairwise’ or ’surface_fusion’:

’color’: By setting this parameter to one of the following values, the coloring of the reconstructed 3D object
model is either enabled or disabled (’none’). See reconstruct_surface_stereo on how to access
the resulting color information.
’median’ The color value of a 3D point is the median of the color values of all cameras where the 3D point
is visible.
’smallest_distance’ The color value of a 3D point corresponds to the color value of the camera that exhibits
the smallest distance to this 3D point.
’mean_weighted_distances’ All cameras that contribute to the reconstruction of a 3D point are weighted
according to their distance to the 3D point. Cameras with a smaller distance receive a higher weight,
whereas cameras with a larger distance get a lower weight. The color value of a 3D point is then
computed by averaging the weighted color values of the cameras.
’line_of_sight’ The color value of a 3D point corresponds to the color value of the camera that exhibits the
smallest angle between the point normal and the line of sight.
’mean_weighted_lines_of_sight’ All cameras that contribute to the reconstruction of a 3D point are weighted
according to their angle between the point normal and the line of sight. Cameras with a smaller angle
receive a higher weight. The color value of a 3D point is then computed by averaging the weighted color
values of the cameras.
List of values: ’none’, ’smallest_distance’, ’mean_weighted_distances’, ’line_of_sight’,
’mean_weighted_lines_of_sight’, ’median’.
Default: ’none’.
’color_invisible’: If stereo models of type ’surface_fusion’ are used, the reconstruction will contain points without
a direct correspondence to points in the images. These points are not seen by any of the cameras of the stereo
system and are therefore "invisible". A color value for these points has to be calculated using the color
of points in the vicinity. Coloring these "invisible" points can be switched off by setting this parameter
to ’false’. In this case invisible points are assigned 255 as gray value. Normally, coloring of "invisible"
points is not very time-consuming and can remain active. However, it may happen that the value for the
parameter ’resolution’ is considerably finer than the available image resolution. In this case, many invisible
3D points are reconstructed, making the nearest-neighbor search very time-consuming. To avoid an
increased runtime, it is recommended to either adapt the value of ’resolution’ or to switch off the calculation
for invisible points. Please note that for stereo models of type ’surface_pairwise’, this parameter will not
have any effect.
List of values: ’true’, ’false’.
Default: ’true’.
’rectif_interpolation’: Interpolation mode for the rectification maps (see
set_stereo_model_image_pairs). Note that after changing this parameter, you must call
set_stereo_model_image_pairs again for the changes to take effect.
List of values: ’none’, ’bilinear’.
Default: ’bilinear’.
’rectif_sub_sampling’: Sub-sampling factor for the rectification maps (see
set_stereo_model_image_pairs). Note that after changing this parameter, you must call
set_stereo_model_image_pairs again for the changes to take effect.
Suggested values: 0.5, 0.66, 1.0, 1.5, 2.0, 3.0, 4.0.
Default: 1.0.
’rectif_method’: Rectification method for the rectification maps (see set_stereo_model_image_pairs).
Note that after changing this parameter, you must call set_stereo_model_image_pairs again for
the changes to take effect.
List of values: ’viewing_direction’, ’geometric’.
Default: ’viewing_direction’.
’disparity_method’: Method used to create disparity images from the image pairs (see
reconstruct_surface_stereo). Currently, the three methods ’binocular’, ’binocular_mg’
and ’binocular_ms’ are supported. Depending on the chosen method, the HALCON operator
binocular_disparity, binocular_disparity_mg, or binocular_disparity_ms is called
internally.
List of values: ’binocular’, ’binocular_mg’, ’binocular_ms’.
Default: ’binocular’.
’min_disparity’, ’max_disparity’: Minimum and maximum disparity values that are used in the operator
reconstruct_surface_stereo. The number of minimum and maximum disparity values must cor-
respond to the number of image pairs. If ’min_disparity’ and ’max_disparity’ are not set by the opera-
tor set_stereo_model_param, the disparity values are estimated internally by using the underlying
bounding box.
Note that because ’min_disparity’ and ’max_disparity’ are tuple-valued parameters, they cannot be set in a
single call of set_stereo_model_param together with other model parameters (see the paragraph "A
note on tuple-valued model parameters" below).
’binocular_score_thresh’: For the methods ’binocular_mg’ and ’binocular_ms’ the disparities that have a score
above the passed threshold are excluded from further processing steps and do not end up in the reconstructed
3D point cloud. For the method ’binocular’ the disparities below the passed threshold are excluded.
For stereo models with the method ’binocular’: List of values: positive and negative integer or float value.
Default: 0.5.
For stereo models with the method ’binocular_mg’ or ’binocular_ms’: List of values: integer or float
value greater or equal to 0.0.
Default: -1.
Depending on the selected disparity method, a set of different parameters is available for the user. These
parameters allow a fine-tuning of the method to the data set at hand. More information about the parameters can be found
in the respective operator reference of binocular_disparity, binocular_disparity_mg or
binocular_disparity_ms.
Set of parameters for stereo models with method = ’binocular’

’binocular_method’: Sets the desired matching method.


List of values: ’ncc’, ’sad’, ’ssd’.
Default: ’ncc’.
’binocular_num_levels’: Number of used image pyramid levels.
List of values: integer value greater or equal to 1.
Default: 1.
’binocular_mask_width’: Width of the correlation window.
List of values: Odd integer value greater or equal to 3.
Default: 11.
’binocular_mask_height’: Height of the correlation window.
List of values: Odd integer value greater or equal to 3.
Default: 11.
’binocular_texture_thresh’: Variance threshold of textured image regions.
List of values: integer or float value greater or equal to 0.0.
Default: 0.0.
’binocular_filter’: Downstream filters.
List of values: ’none’, ’left_right_check’.
Default: ’none’.
’binocular_sub_disparity’: Subpixel interpolation of disparities.
List of values: ’none’, ’interpolation’.
Default: ’none’.
Set of parameters for stereo models with method = ’binocular_mg’

’binocular_mg_gray_constancy’: Weight of the gray value constancy in the data term.


List of values: integer or float value greater or equal to 0.0.
Default: 1.0.
’binocular_mg_gradient_constancy’: Weight of the gradient constancy in the data term.
List of values: integer or float value greater or equal to 0.0.
Default: 30.0.
’binocular_mg_smoothness’: Weight of the smoothness term in relation to the data term.
List of values: integer or float value greater than 0.0.
Default: 5.0.
’binocular_mg_initial_guess’: Initial guess of the disparity.
List of values: integer or float value.
Default: 0.0.
The subsequent parameters control the behavior of the used multigrid method.
’binocular_mg_default_parameters’: Sets predefined values for the following parameters of the used
multigrid method: ’binocular_mg_solver’, ’binocular_mg_cycle_type’, ’binocular_mg_pre_relax’,
’binocular_mg_post_relax’, ’binocular_mg_initial_level’, ’binocular_mg_iterations’, ’binocu-
lar_mg_pyramid_factor’. The exact values of these parameters can be found in the operator reference
of binocular_disparity_mg.
List of values: ’very_accurate’, ’accurate’, ’fast_accurate’, ’fast’.
Default: ’fast_accurate’.
’binocular_mg_solver’: Solver for the linear system.
List of values: ’multigrid’, ’full_multigrid’, ’gauss_seidel’.
Default: ’full_multigrid’.
’binocular_mg_cycle_type’: Selects the type of recursion for the multigrid solvers.
List of values: ’v’,’w’, ’none’.
Default: ’v’.
’binocular_mg_pre_relax’: Sets the number of iterations of the pre-relaxation step in multigrid solvers, or
the number of iterations for the Gauss-Seidel solver, depending on which is selected.
List of values: integer or float value greater than 0.0.
Default: 1.
’binocular_mg_post_relax’: Sets the number of iterations of the post-relaxation step.
List of values: integer or float value.
Default: 1.
’binocular_mg_initial_level’: Sets the coarsest level of the image pyramid where the coarse-to-fine process
starts.
List of values: integer value.
Default: -2.
’binocular_mg_iterations’: Sets the number of iterations of the fixed point iteration per pyramid level.
List of values: integer or float value greater or equal to 0.
Default: 1.
’binocular_mg_pyramid_factor’: Determines the factor by which the images are scaled when creating the
image pyramid for the coarse-to-fine process.
List of values: integer or float value between 0.1 and 0.9.
Default: 0.6.
Set of parameters for stereo models with method = ’binocular_ms’

’binocular_ms_surface_smoothing’: Smoothing of surfaces.


List of values: integer value greater or equal to 0.
Default: 50.
’binocular_ms_edge_smoothing’: Smoothing of edges.
List of values: integer value greater or equal to 0.
Default: 50.
’binocular_ms_consistency_check’: This parameter increases the robustness of the returned matches
since the result relies on a concurrent direct and reverse match.
List of values: ’true’, ’false’.
Default: ’true’.
’binocular_ms_similarity_measure’: Sets the method of the similarity measure.
List of values: ’census_dense’, ’census_sparse’.
Default: ’census_dense’.
’binocular_ms_sub_disparity’: Enables or disables the sub-pixel refinement of disparities.
List of values: ’true’, ’false’.
Default: ’true’.


’point_meshing’: Enables the post-processing step for meshing the reconstructed surface points. For a stereo
model of type ’surface_pairwise’, a Poisson solver is supported. For a stereo model of type ’surface_fusion’,
a meshing of the isosurface is supported (see reconstruct_surface_stereo for more details).
List of values: ’none’, ’poisson’, ’isosurface’.
Default: ’none’.
If the Poisson-based meshing is enabled, the following parameters can be set:
• ’poisson_depth’: Depth of the solver octree. More detail (i.e., a higher resolution) of the resulting mesh
is achieved with deeper trees. However, this requires more time and memory.
Suggested values: 6, 8, 10.
Default: 8.
Restriction: 3 <= ’poisson_depth’ <= 12
• ’poisson_solver_divide’: Depth of block Gauss-Seidel solver used for solving the Poisson equation. At
the price of a small time overhead, this parameter reduces the memory consumption of the underlying
meshing algorithm. Proposed values are 0 to 2 smaller than the main octree depth (’poisson_depth’).
Suggested values: 6, 8, 10.
Default: 8.
Restriction: 3 <= ’poisson_solver_divide’ <= ’poisson_depth’
• ’poisson_samples_per_node’: Minimum number of points that should fall in a single octree leaf. This
parameter is used to handle noisy data, e.g., noise-free data can be distributed over many leaves, whereas
more noisy data should be stored in a single leaf to compensate for the noise. As a side effect, bigger
values of this parameter distribute the data in fewer leaves, which results in a smaller octree, which
means a speedup but possibly less detail of the reconstruction.
Suggested values: 1, 5, 10, 30, 40.
Default: 30.
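A minimal HDevelop sketch enabling the Poisson-based meshing for a stereo model of type ’surface_pairwise’
(the octree depth is illustrative):

* Mesh the reconstructed surface points with a Poisson solver of octree depth 8.
set_stereo_model_param (StereoModelID, ['point_meshing','poisson_depth'], ['poisson', 8])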

Parameters only for ’surface_pairwise’:


By setting GenParamName to one of the following values, additional parameters specific for surface reconstruc-
tion can be set with GenParamValue for a stereo model of type ’surface_pairwise’:

’sub_sampling_step’: Sub-sampling step for the X, Y, and Z image data resulting from the pairwise
disparity estimation, before this data is in turn used for the surface reconstruction (see
reconstruct_surface_stereo).
Suggested values: 1, 2, 3.
Default: 2.

Parameters only for ’surface_fusion’:


By setting GenParamName to one of the following values, additional parameters specific for surface reconstruc-
tion can be set with GenParamValue for a stereo model of type ’surface_fusion’:

’resolution’: Distance between neighboring sample points in each coordinate direction in the discretization of the
bounding box. ’resolution’ is set in [m]. See reconstruct_surface_stereo for more details.
Too small values will unnecessarily increase the runtime. Too large values will lead to a reconstruction with
too few details. Per default, it is set to a coarse resolution depending on the bounding box. The parameter
will be reset if the bounding box is reset.
’smoothing’ may need to be adapted when ’resolution’ is changed.
’surface_tolerance’ should always be a bit larger than ’resolution’ in order to avoid effects of discretization
(see the combined sketch after this list).
Suggested values: 0.001, 0.01
’surface_tolerance’: Specifies how much noise around the input point cloud should be combined to a sur-
face. Points in a cone of sight of a camera are considered surely outside of the object (in front
of the surface) or surely inside the object (behind the surface) with respect to the given camera if
their distance to the initial surface exceeds ’surface_tolerance’. ’surface_tolerance’ is set in [m]. See
reconstruct_surface_stereo for more details and a figure.
Too small values lead to an uneven surface. Too large values smudge distinct surfaces into one. Per default,
it is set to three times ’resolution’. The parameter will be reset if the bounding box is reset.
’surface_tolerance’ should always be a bit larger than ’resolution’ in order to avoid effects of discretization.
’min_thickness’ always has to be larger than or equal to ’surface_tolerance’. If ’min_thickness’ is set too
small, ’surface_tolerance’ is automatically set to the same value as ’min_thickness’. If ’surface_tolerance’ is
set too big, an error is raised.
Suggested values: 0.003, 0.03
Restriction: ’surface_tolerance’ < ’min_thickness’
’min_thickness’: Length of considered cone of sight of a camera behind the initial surface obtained by pairwise
reconstruction. Points behind the surface (viewed from the given camera) are only considered to lie inside
the object if their distance to the initial surface does not exceed ’min_thickness’. ’min_thickness’ is set in
[m]. See reconstruct_surface_stereo for more details and a figure.
If lines of sight are expected to intersect the closed object only once (cameras all observe the object head-on
from one side), this parameter should remain at the very large default setting.
If lines of sight are expected to intersect the object more often (cameras observe the object from different
sides), only the interior of the object of interest should be marked as lying behind the surface. Thus, a first
guess for the parameter could be less than the thickness of your object.
The method ’surface_fusion’ will try to produce a closed surface. If you observe several distinct objects from
only one side, you may want to reduce the parameter ’min_thickness’ to restrict the depth of reconstructed
objects and thus keep them from being smudged into one surface. The backside of the objects is not observed
and thus its reconstruction will probably be incorrect.
Too small values can result in holes in the reconstructed point cloud or double walls. Too large values can
result in a distorted point cloud or blow up the surface towards the outside of the object (if the surface is
blown up beyond the bounding box, no points will be reconstructed). Per default set to the diameter of the
bounding box. The parameter will be reset if the bounding box is reset.
’min_thickness’ always has to be larger than or equal to ’surface_tolerance’. If ’min_thickness’ is set too
small, ’surface_tolerance’ is automatically set to the same value as ’min_thickness’. If ’surface_tolerance’ is
set too big, an error is raised.
Suggested values: 0.005, 0.05.
’smoothing’: The parameter ’smoothing’ determines how important a small total variation of the distance func-
tion is compared to data fidelity. Thus, ’smoothing’ regulates the ’jumpiness’ of the resulting surface (see
reconstruct_surface_stereo for more details).
Note that the value of ’smoothing’ that yields a visually pleasing result for a given data set has to be found
by trial and error. Too small values lead to many outliers being integrated into the surface, even though the
surface then exhibits many jumps. Too large values lead to lost fidelity towards the point clouds of the
pairwise reconstruction (how the algorithm views distances to the input point clouds depends heavily on
’surface_tolerance’ and ’min_thickness’).
The parameter will be reset if the bounding box is reset. ’smoothing’ may need to be adapted when ’resolu-
tion’ is changed.
Suggested values: 15.0, 1.0, 0.1.
Default: 1.0.
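A minimal HDevelop sketch for a stereo model of type ’surface_fusion’ (all values are in meters and purely
illustrative; note the recommended relations ’resolution’ < ’surface_tolerance’ <= ’min_thickness’):

* Sample the bounding box every 2 mm.
set_stereo_model_param (StereoModelID, 'resolution', 0.002)
* Combine roughly 6 mm of noise around the initial surface into one surface.
set_stereo_model_param (StereoModelID, 'surface_tolerance', 0.006)
* The object of interest is expected to be at most 5 cm thick.
set_stereo_model_param (StereoModelID, 'min_thickness', 0.05)
* Moderate regularization of the fused surface.
set_stereo_model_param (StereoModelID, 'smoothing', 1.0)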

All parameters except ’binocular_mg_default_parameters’ can be read back by get_stereo_model_param.


A note on tuple-valued model parameters
Most of the stereo model parameters are single-valued. Thus, you can provide a list (i.e., tuple) of parameter names
and a list (tuple) of values of the same length. In contrast, when setting a tuple-valued pa-
rameter, you must pass a tuple of values. When setting such a parameter together with other parameters, the value-
to-parameter-name correspondence is not obvious anymore. Thus, tuple-valued parameters like ’bounding_box’,
’min_disparity’ or ’max_disparity’ should always be set in a separate call to set_stereo_model_param.
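The following HDevelop sketch illustrates this calling convention (all values are illustrative; the disparity tuples
assume two image pairs):

* Several single-valued parameters can be combined in one call.
set_stereo_model_param (StereoModelID, ['disparity_method','rectif_sub_sampling'], ['binocular_ms',2.0])
* Tuple-valued parameters need one separate call per parameter.
set_stereo_model_param (StereoModelID, 'min_disparity', [-40,-60])
set_stereo_model_param (StereoModelID, 'max_disparity', [40,60])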
Parameters

. StereoModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . stereo_model ; handle


Handle of the stereo model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Names of the parameters to be set.
List of values: GenParamName ∈ {’bounding_box’, ’persistence’, ’sub_sampling_step’,
’rectif_interpolation’, ’rectif_sub_sampling’, ’rectif_method’, ’disparity_method’, ’binocular_method’,
’binocular_num_levels’, ’binocular_mask_width’, ’binocular_mask_height’, ’binocular_texture_thresh’,
’binocular_score_thresh’, ’binocular_filter’, ’binocular_sub_disparity’, ’binocular_mg_gray_constancy’,
’binocular_mg_gradient_constancy’, ’binocular_mg_smoothness’, ’binocular_mg_initial_guess’,
’binocular_mg_default_parameters’, ’binocular_mg_solver’, ’binocular_mg_cycle_type’,
’binocular_mg_pre_relax’, ’binocular_mg_post_relax’, ’binocular_mg_initial_level’,
’binocular_mg_iterations’, ’binocular_mg_pyramid_factor’, ’binocular_ms_surface_smoothing’,
’binocular_ms_edge_smoothing’, ’binocular_ms_consistency_check’, ’binocular_ms_similarity_measure’,
’binocular_ms_sub_disparity’, ’point_meshing’, ’poisson_depth’, ’poisson_solver_divide’,
’poisson_samples_per_node’, ’resolution’, ’surface_tolerance’, ’min_thickness’, ’smoothing’, ’color’,
’color_invisible’, ’min_disparity’, ’max_disparity’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; real / integer / string
Values of the parameters to be set.
Suggested values: GenParamValue ∈ {1, -2, -5, 0, 0.3, 0.5, 0.9, 1, 2, 3, ’census_dense’, ’census_sparse’,
’binocular’, ’ncc’, ’none’, ’sad’, ’ssd’, ’bilinear’, ’false’, ’viewing_direction’, ’geometric’, ’very_accurate’,
’accurate’, ’fast_accurate’, ’fast’, ’v’, ’w’, ’none’, ’gauss_seidel’, ’multigrid’, ’true’, ’poisson’, ’isosurface’,
’interpolation’, ’left_right_check’, ’full_multigrid’, ’binocular_mg’, ’binocular_ms’, ’smallest_distance’,
’mean_weighted_distances’, ’line_of_sight’, ’mean_weighted_lines_of_sight’, ’median’}
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• StereoModelID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_stereo_model
Possible Successors
reconstruct_surface_stereo, reconstruct_points_stereo
See also
get_stereo_model_param, set_stereo_model_image_pairs
Module
3D Metrology

5.4 Photometric Stereo

estimate_al_am ( Image : : : Albedo, Ambient )

Estimate the albedo of a surface and the amount of ambient light.


estimate_al_am estimates the Albedo of a surface, i.e. the percentage of light reflected by the surface, and
the amount of ambient light Ambient by using the maximum and minimum gray values of the image.
Attention
It is assumed that the image contains at least one point for which the reflection function assumes its minimum, e.g.,
points in shadows. Furthermore, it is assumed that the image contains at least one point for which the reflection
function assumes its maximum. If this is not the case, wrong values will be estimated.
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte
Image for which albedo and ambient are to be estimated.
. Albedo (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Amount of light reflected by the surface.
. Ambient (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Amount of ambient light.


Result
estimate_al_am always returns the value 2 (H_MSG_TRUE).
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on tuple level.
Possible Successors
sfs_mod_lr, sfs_orig_lr, sfs_pentland, photometric_stereo, shade_height_field
Module
3D Metrology

estimate_sl_al_lr ( Image : : : Slant, Albedo )

Estimate the slant of a light source and the albedo of a surface.


estimate_sl_al_lr estimates the Slant of a light source, i.e., the angle between the light source and the
positive z-axis, and the albedo of the surface in the input image Image, i.e. the percentage of light reflected by
the surface, using the algorithm of Lee and Rosenfeld.
Attention
The Albedo is assumed constant for the entire surface depicted in the image.
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte
Image for which slant and albedo are to be estimated.
. Slant (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg(-array) ; real
Angle between the light sources and the positive z-axis (in degrees).
. Albedo (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Amount of light reflected by the surface.
Result
estimate_sl_al_lr always returns the value 2 (H_MSG_TRUE).
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on tuple level.
Possible Successors
sfs_mod_lr, sfs_orig_lr, sfs_pentland, photometric_stereo, shade_height_field
Module
3D Metrology

estimate_sl_al_zc ( Image : : : Slant, Albedo )

Estimate the slant of a light source and the albedo of a surface.


estimate_sl_al_zc estimates the Slant of a light source, i.e. the angle between the light source and the
positive z-axis, and the albedo of the surface in the input image Image, i.e. the percentage of light reflected by
the surface, using the algorithm of Zheng and Chellappa.
Attention
The Albedo is assumed constant for the entire surface depicted in the image.


Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte
Image for which slant and albedo are to be estimated.
. Slant (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg(-array) ; real
Angle between the light sources and the positive z-axis (in degrees).
. Albedo (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Amount of light reflected by the surface.
Result
estimate_sl_al_zc always returns the value 2 (H_MSG_TRUE).
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on tuple level.
Possible Successors
sfs_mod_lr, sfs_orig_lr, sfs_pentland, photometric_stereo, shade_height_field
Module
3D Metrology

estimate_tilt_lr ( Image : : : Tilt )

Estimate the tilt of a light source.


estimate_tilt_lr estimates the tilt of a light source, i.e. the angle between the light source and the x-axis
after projection into the xy-plane, from the image Image using the algorithm of Lee and Rosenfeld.
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte
Image for which the tilt is to be estimated.
. Tilt (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg(-array) ; real
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
Result
estimate_tilt_lr always returns the value 2 (H_MSG_TRUE).
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on tuple level.
Possible Successors
sfs_mod_lr, sfs_orig_lr, sfs_pentland, photometric_stereo, shade_height_field
Module
3D Metrology

estimate_tilt_zc ( Image : : : Tilt )

Estimate the tilt of a light source.


estimate_tilt_zc estimates the tilt of a light source, i.e. the angle between the light source and the x-axis
after projection into the xy-plane, from the image Image using the algorithm of Zheng and Chellappa.


Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte
Image for which the tilt is to be estimated.
. Tilt (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg(-array) ; real
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
Result
estimate_tilt_zc always returns the value 2 (H_MSG_TRUE).
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on tuple level.
Possible Successors
sfs_mod_lr, sfs_orig_lr, sfs_pentland, photometric_stereo, shade_height_field
Module
3D Metrology

photometric_stereo ( Images : HeightField, Gradient, Albedo : Slants, Tilts,
ResultType, ReconstructionMethod, GenParamName, GenParamValue : )

Reconstruct a surface according to the photometric stereo technique.


photometric_stereo can be used to separate the three-dimensional shape of an object from its two-
dimensional texture, e.g., its print image. The operator requires at least three images of the same object taken
with different and known directions of illumination. Note that the point of view of the camera must be the same
for all images.
The three-dimensional shape of the object is primarily computed as the local gradients of the three-dimensional
surface. Those gradients can be further integrated to obtain a height field, i.e., an image in which the pixel values
correspond to a relative height. The two-dimensional texture is called albedo and corresponds to the local light
absorption and reflection characteristics of the surface exclusive of any shading effect.
Typical applications of photometric stereo
Typical applications of photometric stereo are to detect small inconsistencies in a surface that represent, e.g.,
defects, or to exclude the influence of the direction of light from images that are used, e.g., for the print inspection
of non-flat characters. Note that photometric stereo is not suitable for the reconstruction of absolute heights, i.e., it
is no alternative to typical 3D reconstruction algorithms like depth from focus or sheet of light.
Limitations of photometric stereo
photometric_stereo is based on the algorithm of Woodham and therefore assumes on the one hand that the
camera performs an orthoscopic projection. That is, you must use a telecentric lens or a lens with a long focal
distance. On the other hand, it assumes that each of the light sources delivers a parallel and uniform beam of light.
That is, you must use telecentric illumination sources with uniform intensity or, as an alternative, distant point light
sources. Additionally, the object must have Lambertian reflectance characteristics, i.e., it must reflect incoming
light in a diffuse way. Objects or regions of an object that have specular reflectance characteristics (i.e., mirroring
or glossy surfaces) cannot be processed correctly and thus lead to erroneous results.
The acquisition setup
The camera with a telecentric lens must be placed orthogonally, i.e., perpendicular, to the scene that should be
reconstructed. The orientation of the camera with respect to the scene must not change during the acquisition of
the images. In contrast, the orientation of the illumination with respect to the camera must change for at least three
gray value images.
Specifying the directions of illumination
For each image, the directions of illumination must be specified as angles within the parameters Slants and
Tilts, which describe the direction of the illumination in relation to the scene. To understand the meaning of the


parameters Slants and Tilts, remember that the illumination source is assumed to produce parallel light rays,
the camera has a telecentric lens, and the camera is placed orthogonal to the scene to reconstruct:

Slants The Slants angle is the angle between the optical axis of the camera and the direction of the illumina-
tion.

[Figure: side view of the acquisition setup, illustrating the slant angle between the optical axis and the direction of illumination]
Tilts The Tilts angle is measured within the object plane or any plane that is parallel to it, e.g., the image
plane. In particular, it describes the angle between the direction that points from the center of the image to
the right and the direction of light that is projected into the plane. That is, when looking at the image (or the
corresponding scene), a tilt angle of 0 means that the light comes from the right, a tilt angle of 90 means that
the light is coming from the top, a tilt angle of 180 means that the light is coming from the left, etc.
[Figure: top view of the object plane, illustrating the tilt angle; 0° means light from the right, 90° from the top,
180° from the left, and 270° from the bottom]

As stated before, photometric stereo requires at least three images with different directions of illumination. How-
ever, the three-dimensional geometry of objects typically leads to shadow casting. In the shadow regions, the
number of effectively available directions of illumination is reduced, which leads to ambiguities. To nevertheless
get a robust result, redundancy is needed. Therefore, typically more than three light sources with different direc-
tions should be used. But note that an increasing number of illumination directions also leads to a higher number


of images to be processed and therefore to a higher processing time. In most applications, a number of four to six
light sources is reasonable. As a rule of thumb, the slant angles should be chosen between 30° and 60°. The tilt
angles typically should be equally distributed around the object to be measured. Please note that the directions of
illumination must be selected such that they do not lie in the same plane (i.e., the illumination directions must be
independent), otherwise the computation fails and an exception is thrown.
Input images and domains of definition
The input images must be provided in an image array (Images). Each image must have been taken with a different
direction of illumination as stated above. If the images are originally stored in a multi-channel image, they can
easily be converted to an image array using image_to_channels. As an alternative, the image array can be
created using concat_obj.
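A minimal HDevelop sketch of this conversion (the file name is purely illustrative):

* Turn a multi-channel image into an image array with one image per illumination direction.
read_image (MultiChannelImage, 'ps_object')
image_to_channels (MultiChannelImage, Images)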
photometric_stereo relies on the evaluation of the "photometric information", i.e., the gray values
stored in the images. Therefore, this information should be unbiased and accurate. We recommend ensuring
that the camera that is used to acquire the images has a linear characteristic. You can use the operator
radiometric_self_calibration to determine the characteristic of your camera and the operator
lut_trans to correct the gray value information in case of a non-linear characteristic. Additionally, if accurate
measurements are required, we recommend utilizing the full dynamic range of the camera since this leads to
more accurate gray value information. For the same reason, using images with a bit depth higher than 8 (e.g., uint2
images instead of byte images) leads to a better accuracy.
The domain of definition of the input images determines which algorithm is used internally to process the Images.
Three algorithms are available:

• If all images have a full domain, the fastest algorithm is used. This mode is recommended for most applica-
tions.
• If the input images share the same reduced domain of definition, only the pixels within the domain are
processed. This mode can be used to exclude areas of the object from all images. Typically, areas are
excluded that are known to show non-Lambertian reflectance characteristics or that are of no interest, e.g.,
holes in the surface.
• If images with distinct domains of definition are provided, only the gray values that are contained in the
domains are used in the respective images. Then, only those pixels are processed that have independent slant
and tilt angles in at least three images. This mode is suitable, e.g., to exclude specific regions of individual
images from the processing. These can be, e.g., areas of the object that are known to show non-Lambertian
reflectance characteristics or regions that are known to contain biased photometric
information, e.g., shadows. Excluding such regions leads to more accurate results. Please note that this last
mode requires significantly more processing time than the modes that use the full domain or the same domain
for all images.

Output images
The operator can return the images for the reconstructed Gradient, Albedo, and the HeightField of the
surface:

• The Gradient image is a vector field that contains the partial derivatives of the surface. Note that
Gradient can be used as input to reconstruct_height_field_from_gradient. For visu-
alization purposes, normalized surface normals can be returned instead of the surface gradients. Then,
ResultType must be set to ’normalized_surface_normal’ (legacy: ’normalized_gradient’) instead of
’gradient’. Here, the row and column components of the vector field represent the row and column
components of the normalized surface normal. If ResultType is set to ’all’, the default mode is used,
i.e., ’gradient’ and not ’normalized_surface_normal’.
• The Albedo image describes the ratio of reflected radiation to incident radiation and has a value between one
(white surface) and zero (black surface). Thus, the albedo is a characteristic of the surface. For example, for
a printed surface it corresponds to the print image exclusive of any influences of the incident light (shading).
• The HeightField image is an image in which the pixel values correspond to a relative height.

By default, all of these iconic objects are returned, i.e., the parameter ResultType is set to ’all’. If
only some of these results are needed, the parameter ResultType can be set to a tuple specifying only
the required results among the values ’gradient’, ’albedo’, and ’height_field’. Note that in certain applications
like surface inspection tasks only the Gradient or Albedo images are required. Here, one can significantly


increase the processing speed by not reconstructing the surface, i.e., by passing only ’gradient’ and ’albedo’ but
not ’height_field’ to ResultType.
Note that internally photometric_stereo first determines the gradient values and, if required, integrates
these values in order to obtain the height field. This integration is performed by the same algorithms that are
provided by the operator reconstruct_height_field_from_gradient and that can be controlled by
the parameters ReconstructionMethod, GenParamName, and GenParamValue. Please, refer to the
operator reconstruct_height_field_from_gradient for more information on these parameters. If
ResultType is set such that ’height_field’ is not one of the results, the parameters ReconstructionMethod,
GenParamName, and GenParamValue are ignored.
Attention
Note that photometric_stereo assumes square pixels. Additionally, it assumes that the heights are computed
on a lattice with step width 1 in object space. If this is not the case, i.e., if the pixel size of the camera projected
into the object space differs from 1, the returned height values must be multiplied by the actual step width (value
of the pixel size projected into the object space). The size of the pixel in object space is computed by dividing the
size of the pixel in the camera by the magnification of the (telecentric) lens.
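A minimal HDevelop sketch (the four illumination directions are illustrative; Images is assumed to be an image
array as described above):

* Four light sources with a slant of 45 degrees and tilts equally distributed around the object.
Slants := [45, 45, 45, 45]
Tilts := [0, 90, 180, 270]
* Reconstruct gradients, albedo, and the integrated height field using the 'poisson' method.
photometric_stereo (Images, HeightField, Gradient, Albedo, Slants, Tilts, 'all', 'poisson', [], [])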
Parameters

. Images (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte / uint2


Array with at least three input images with different directions of illumination.
. HeightField (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; object : real
Reconstructed height field.
. Gradient (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; object : vector_field
The gradient field of the surface.
. Albedo (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; object : real
The albedo of the surface.
. Slants (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg-array ; real / integer
Angle between the camera and the direction of illumination (in degrees).
Default: 45.0
Suggested values: Slants ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Value range: 0.0 ≤ Slants ≤ 180.0 (lin)
Minimum increment: 0.01
Recommended increment: 10.0
. Tilts (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg-array ; real / integer
Angle of the direction of illumination within the object plane (in degrees).
Default: 45.0
Suggested values: Tilts ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Value range: 0.0 ≤ Tilts ≤ 360.0 (lin)
Minimum increment: 0.01
Recommended increment: 10.0
. ResultType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
Types of the requested results.
Default: ’all’
List of values: ResultType ∈ {[], ’all’, ’height_field’, ’gradient’, ’normalized_surface_normal’, ’albedo’}
. ReconstructionMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the reconstruction method.
Default: ’poisson’
List of values: ReconstructionMethod ∈ {’fft_cyclic’, ’rft_cyclic’, ’poisson’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’optimize_speed’, ’caching’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer / real / string
Values of the generic parameters.
Default: []
List of values: GenParamValue ∈ {’standard’, ’patient’, ’exhaustive’, ’use_cache’, ’no_cache’,
’free_cache’}


Result
If the parameters are valid, photometric_stereo returns the value 2 (H_MSG_TRUE). If necessary, an ex-
ception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.

Possible Predecessors
optimize_fft_speed
Module
3D Metrology

reconstruct_height_field_from_gradient (
Gradient : HeightField : ReconstructionMethod, GenParamName,
GenParamValue : )

Reconstruct a surface from surface gradients.


reconstruct_height_field_from_gradient reconstructs a surface from the surface gradients that are
given in Gradient. The surface is returned as a height field, i.e., an image in which the gray value of each image
point corresponds to a relative height.
The reconstruction is done by integrating the gradients by different algorithms that can be selected in the parameter
ReconstructionMethod. Because gradient fields are typically non-integrable due to noise, the various algo-
rithms return a solution in a least-squares sense. The algorithms differ in the way they model the boundary
condition. Currently three algorithms are supported: ’fft_cyclic’, ’rft_cyclic’ and ’poisson’.
Reconstruction with Fast Fourier transforms
The variants ’fft_cyclic’ and ’rft_cyclic’ assume that the image function is cyclic at the boundaries. Note that
due to the assumed cyclic image function artifacts may occur at the image boundaries. Thus, in most cases, we
recommend to use the ’poisson’ algorithm instead.
The difference between ’fft_cyclic’ and ’rft_cyclic’ is that the rft version has faster processing times and re-
quires less memory than the fft version. While theoretically fft and rft should return the same result, the fft
version is numerically slightly more accurate. As reconstruct_height_field_from_gradient in-
ternally uses a fast Fourier transform, the run time of the operator can be influenced by a previous call to
optimize_fft_speed or optimize_rft_speed, respectively.
Reconstruction according to Poisson
The ’poisson’ algorithm assumes that the image has constant gradients at the image border. In most cases,
it is the recommended reconstruction method for reconstruct_height_field_from_gradient. Its
run time can only be optimized by setting GenParamName to ’optimize_speed’ and GenParamValue to
’standard’, ’patient’, or ’exhaustive’. These parameters are described in more detail with the description of
optimize_fft_speed.
Note that by default, the ’poisson’ algorithm uses a cache that depends on the image size and that speeds up the
reconstruction significantly, provided that all images have the same size. The cache is allocated at the first time
when the ’poisson’ algorithm is called. Therefore the first call always takes longer than subsequent calls. The
additionally needed memory corresponds to the memory needed for the specific size of one image. Please note
that when calling the operator with different image sizes, the cache needs to be reallocated, which leads to a longer
processing time. In this case it may be preferable to not use the cache. To switch off the caching, you must set
the parameter GenParamName to ’caching’ and the parameter GenParamValue to ’no_cache’. The cache
can explicitly be deallocated by setting GenParamName to ’caching’ and GenParamValue to ’free_cache’.
However, in the majority of cases, we recommend to use the cache, i.e., to use the default setting for the parameter
’caching’.
Saving and loading optimization parameters


The optimization parameters for all algorithms can be saved and loaded by
write_fft_optimization_data and read_fft_optimization_data.
Non obvious applications
Please note that the operator reconstruct_height_field_from_gradient has various non-obvious
applications, especially in the field of gradient domain manipulation techniques. In many applications, the
gradient values that are passed as input to the operator do not have the semantics of surface gradients (i.e., the
first derivatives of the height values), but are rather the first derivatives of other kinds of parameters, typically
gray values (then, the gradients have the semantics of gray value edges). When processing these gradient images
by various means, e.g., by adding or subtracting images, or by filtering, the original gradient values are altered
and the subsequent call to reconstruct_height_field_from_gradient delivers a modified image, in
which, e.g., unwanted edges are removed or the contrast has been changed locally. Typical applications are noise
removal, seamless fusion of images, or high dynamic range compression.
Attention
reconstruct_height_field_from_gradient takes into account the values of all pixels in Gradient,
not only the values within its domain. If Gradient does not have a full domain, one could cut out the relevant
square part of the gradient field and generate a smaller image with full domain.
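One possible sketch of this workaround (assuming that crop_domain can be applied to the gradient field at hand):

* Crop the gradient field to the smallest rectangle enclosing its domain,
* so that the resulting image has a full domain again.
crop_domain (Gradient, GradientCropped)
reconstruct_height_field_from_gradient (GradientCropped, HeightField, \
                                        'poisson', [], [])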
Parameters
. Gradient (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : vector_field
The gradient field of the image.
. HeightField (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; object : real
Reconstructed height field.
. ReconstructionMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the reconstruction method.
Default: ’poisson’
List of values: ReconstructionMethod ∈ {’fft_cyclic’, ’rft_cyclic’, ’poisson’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’optimize_speed’, ’caching’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer / real / string
Values of the generic parameters.
Default: []
List of values: GenParamValue ∈ {’standard’, ’patient’, ’exhaustive’, ’use_cache’, ’no_cache’,
’free_cache’}
Result
If the parameters are valid reconstruct_height_field_from_gradient returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.

References
M. Kazhdan, M. Bolitho, and H. Hoppe: “Poisson Surface Reconstruction.” Symposium on Geometry Processing
(June 2006).
Module
3D Metrology

sfs_mod_lr ( Image : Height : Slant, Tilt, Albedo, Ambient : )

Reconstruct a surface from a gray value image.


sfs_mod_lr reconstructs a surface (i.e. the relative height of each image point) using the modified algorithm of
Lee and Rosenfeld. The surface is reconstructed from the input image Image. The light source is given by the
parameters Slant, Tilt, Albedo and Ambient, and is assumed to lie infinitely far away in the direction given
by Slant and Tilt. The parameter Albedo determines the albedo of the surface, i.e. the percentage of light
reflected in all directions. Ambient determines the amount of ambient light falling onto the surface. It can be set
to values greater than zero if, for example, the white balance of the camera was badly adjusted at the moment the
image was taken.
Attention
sfs_mod_lr assumes that the heights are to be extracted on a lattice with step width 1. If this is not the case, the
calculated heights must be multiplied by the step width after the call to sfs_mod_lr. A Cartesian coordinate
system with the origin in the lower left corner of the image is used internally. sfs_mod_lr can only handle
byte-images.
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte
Shaded input image.
. Height (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; object : real
Reconstructed height field.
. Slant (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; real / integer
Angle between the light source and the positive z-axis (in degrees).
Default: 45.0
Suggested values: Slant ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Value range: 0.0 ≤ Slant ≤ 180.0 (lin)
Minimum increment: 0.01
Recommended increment: 10.0
. Tilt (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; real / integer
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
Default: 45.0
Suggested values: Tilt ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Value range: 0.0 ≤ Tilt ≤ 360.0 (lin)
Minimum increment: 0.01
Recommended increment: 10.0
. Albedo (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Amount of light reflected by the surface.
Default: 1.0
Suggested values: Albedo ∈ {0.1, 0.5, 1.0, 5.0}
Value range: 0.0 ≤ Albedo ≤ 5.0 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
Restriction: Albedo >= 0.0
. Ambient (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Amount of ambient light.
Default: 0.0
Suggested values: Ambient ∈ {0.1, 0.5, 1.0}
Value range: 0.0 ≤ Ambient ≤ 1.0 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
Restriction: Ambient >= 0.0
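Example
A minimal sketch (the image name ’shaded_object’ is only a placeholder); the illumination parameters are estimated from the image itself:

* Read a shaded byte image of the object.
read_image (Image, 'shaded_object')
* Estimate slant, albedo, and tilt of the light source from the image.
estimate_sl_al_lr (Image, Slant, Albedo)
estimate_tilt_lr (Image, Tilt)
* Reconstruct the relative heights and visualize them as a shaded image.
sfs_mod_lr (Image, Height, Slant, Tilt, Albedo, 0.0)
shade_height_field (Height, ImageShade, Slant, Tilt, Albedo, 0.0, 'false')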
Result
If all parameters are correct sfs_mod_lr returns the value 2 (H_MSG_TRUE). Otherwise, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on tuple level.


Possible Predecessors
estimate_al_am, estimate_sl_al_lr, estimate_sl_al_zc, estimate_tilt_lr,
estimate_tilt_zc, optimize_fft_speed
Possible Successors
shade_height_field
Module
3D Metrology

sfs_orig_lr ( Image : Height : Slant, Tilt, Albedo, Ambient : )

Reconstruct a surface from a gray value image.


sfs_orig_lr reconstructs a surface (i.e. the relative height of each image point) using the original algorithm of
Lee and Rosenfeld. The surface is reconstructed from the input image Image. The light source is to be given by
the parameters Slant, Tilt, Albedo and Ambient, and is assumed to lie infinitely far away in the direction
given by Slant and Tilt. The parameter Albedo determines the albedo of the surface, i.e. the percentage of
light reflected in all directions. Ambient determines the amount of ambient light falling onto the surface. It can
be set to values greater than zero if, for example, the white balance of the camera was badly adjusted at the moment
the image was taken.
Attention
sfs_orig_lr assumes that the heights are to be extracted on a lattice with step width 1. If this is not the case, the
calculated heights must be multiplied by the step width after the call to sfs_orig_lr. A Cartesian coordinate
system with the origin in the lower left corner of the image is used internally. sfs_orig_lr can only handle
byte-images.
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte
Shaded input image.
. Height (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; object : real
Reconstructed height field.
. Slant (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; real / integer
Angle between the light source and the positive z-axis (in degrees).
Default: 45.0
Suggested values: Slant ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Value range: 0.0 ≤ Slant ≤ 90.0
Minimum increment: 0.01
Recommended increment: 10.0
. Tilt (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; real / integer
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
Default: 45.0
Suggested values: Tilt ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Value range: 0.0 ≤ Tilt ≤ 360.0
Minimum increment: 0.01
Recommended increment: 10.0
. Albedo (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Amount of light reflected by the surface.
Default: 1.0
Suggested values: Albedo ∈ {0.1, 0.5, 1.0, 5.0}
Value range: 0.0 ≤ Albedo ≤ 5.0 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
Restriction: Albedo >= 0.0


. Ambient (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer


Amount of ambient light.
Default: 0.0
Suggested values: Ambient ∈ {0.1, 0.5, 1.0}
Value range: 0.0 ≤ Ambient ≤ 1.0 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
Restriction: Ambient >= 0.0
Result
If all parameters are correct sfs_orig_lr returns the value 2 (H_MSG_TRUE). Otherwise, an exception is
raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on tuple level.
Possible Predecessors
estimate_al_am, estimate_sl_al_lr, estimate_sl_al_zc, estimate_tilt_lr,
estimate_tilt_zc, optimize_fft_speed
Possible Successors
shade_height_field
Module
3D Metrology

sfs_pentland ( Image : Height : Slant, Tilt, Albedo, Ambient : )

Reconstruct a surface from a gray value image.


sfs_pentland reconstructs a surface (i.e. the relative height of each image point) using the algorithm of
Pentland. The surface is reconstructed from the input image Image. The light source must be given by the
parameters Slant, Tilt, Albedo and Ambient, and is assumed to lie infinitely far away in the direction given
by Slant and Tilt. The parameter Albedo determines the albedo of the surface, i.e. the percentage of light
reflected in all directions. Ambient determines the amount of ambient light falling onto the surface. It can be set
to values greater than zero if, for example, the white balance of the camera was badly adjusted at the moment the
image was taken.
Attention
sfs_pentland assumes that the heights are to be extracted on a lattice with step width 1. If this is not the
case, the calculated heights must be multiplied by the step width after the call to sfs_pentland. A Cartesian
coordinate system with the origin in the lower left corner of the image is used internally. sfs_pentland can
only handle byte-images.
Parameters

. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte


Shaded input image.
. Height (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; object : real
Reconstructed height field.
. Slant (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; real / integer
Angle between the light source and the positive z-axis (in degrees).
Default: 45.0
Suggested values: Slant ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Value range: 0.0 ≤ Slant ≤ 180.0 (lin)
Minimum increment: 1.0
Recommended increment: 10.0


. Tilt (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; real / integer


Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
Default: 45.0
Suggested values: Tilt ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Value range: 0.0 ≤ Tilt ≤ 360.0 (lin)
Minimum increment: 1.0
Recommended increment: 10.0
. Albedo (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Amount of light reflected by the surface.
Default: 1.0
Suggested values: Albedo ∈ {0.1, 0.5, 1.0, 5.0}
Value range: 0.0 ≤ Albedo ≤ 5.0 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
Restriction: Albedo >= 0.0
. Ambient (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Amount of ambient light.
Default: 0.0
Suggested values: Ambient ∈ {0.1, 0.5, 1.0}
Value range: 0.0 ≤ Ambient ≤ 1.0 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
Restriction: Ambient >= 0.0
Result
If all parameters are correct sfs_pentland returns the value 2 (H_MSG_TRUE). Otherwise, an exception is
raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on tuple level.

Possible Predecessors
estimate_al_am, estimate_sl_al_lr, estimate_sl_al_zc, estimate_tilt_lr,
estimate_tilt_zc, optimize_fft_speed
Possible Successors
shade_height_field
Module
3D Metrology

shade_height_field ( ImageHeight : ImageShade : Slant, Tilt,


Albedo, Ambient, Shadows : )

Shade a height field.


shade_height_field computes a shaded image from the height field ImageHeight as if the image were
illuminated by an infinitely far away light source. It is assumed that the surface described by the height field has
Lambertian reflection properties determined by Albedo and Ambient. The parameter Shadows determines
whether shadows are to be calculated.
Attention
shade_height_field assumes that the heights are given on a lattice with step width 1. If this is not the
case, the heights must be divided by the step width before the call to shade_height_field. Otherwise, the
derivatives used internally to compute the orientation of the surface will be estimated too steep or too flat. Example:
The height field is given on 100*100 points on the square [0,1]*[0,1]. Then the heights must be divided by 1/100
first. A Cartesian coordinate system with the origin in the lower left corner of the image is used internally.


Parameters
. ImageHeight (input_object) . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte / int4 / real
Height field to be shaded.
. ImageShade (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; object : byte
Shaded image.
. Slant (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; real / integer
Angle between the light source and the positive z-axis (in degrees).
Default: 0.0
Suggested values: Slant ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Value range: 0.0 ≤ Slant ≤ 180.0 (lin)
Minimum increment: 0.01
Recommended increment: 10.0
. Tilt (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; real / integer
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
Default: 0.0
Suggested values: Tilt ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Value range: 0.0 ≤ Tilt ≤ 360.0 (lin)
Minimum increment: 0.01
Recommended increment: 10.0
. Albedo (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Amount of light reflected by the surface.
Default: 1.0
Suggested values: Albedo ∈ {0.1, 0.5, 1.0, 5.0}
Value range: 0.0 ≤ Albedo ≤ 5.0 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
Restriction: Albedo >= 0.0
. Ambient (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Amount of ambient light.
Default: 0.0
Suggested values: Ambient ∈ {0.1, 0.5, 1.0}
Value range: 0.0 ≤ Ambient ≤ 1.0 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
Restriction: Ambient >= 0.0
. Shadows (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Should shadows be calculated?
Default: ’false’
Suggested values: Shadows ∈ {’true’, ’false’}
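Example
A minimal sketch of the step-width correction described above (HeightField is assumed to be given on 100*100 points of the unit square, i.e., with step width 1/100):

* Divide the heights by the step width 1/100 (i.e., multiply by 100)
* before shading the height field.
scale_image (HeightField, HeightScaled, 100.0, 0.0)
shade_height_field (HeightScaled, ImageShade, 45.0, 45.0, 1.0, 0.0, 'false')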
Result
If all parameters are correct shade_height_field returns the value 2 (H_MSG_TRUE). Otherwise, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on tuple level.
Possible Predecessors
sfs_mod_lr, sfs_orig_lr, sfs_pentland, photometric_stereo
Module
Foundation


uncalibrated_photometric_stereo ( Images : NormalField, Gradient,


Albedo : ResultType : )

Reconstruct a surface from several, differently illuminated images.


uncalibrated_photometric_stereo can be used to extract high-frequency surface details from a given
object with no prior knowledge about the illumination, geometry, and reflectance of the object. The geometry of
interest can be, for example, dents, folds, or scratches. The operator can usually not be used for measuring the
overall shape of an object. The operator returns the normals NormalField of the surface as a 3-channel image
with each image encoding a component of the normal. This can be used to visualize the result as a color-coded
image. Further, it returns the Gradient and the Albedo of the surface. Which result should be calculated can be
controlled with ResultType. This operator is related to photometric_stereo, but does not require known
(i.e. previously calibrated) light directions. Note that photometric_stereo is faster and more accurate, but
needs the light direction information. For sensible results, an orthographic projection of the camera is assumed for
both the calibrated and the uncalibrated case. This is typically achieved by using a telecentric lens or at least a lens
with a long focal length.
The operator requires at least three images of the same object, taken with a static, non-moving camera and different
lighting directions for each image. For best results, the object should exhibit Lambertian reflection properties and
show no inter-reflections or cast shadows.
Parameters
. Images (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte / uint2
The input images with different illumination.
. NormalField (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; object : real
The normal field of the surface.
. Gradient (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; object : vector_field
The gradient field of the surface.
. Albedo (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; object : real
The albedo of the surface.
. ResultType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
The result type.
Default: ’all’
List of values: ResultType ∈ {[], ’all’, ’normal_field’, ’gradient’, ’normalized_gradient’, ’albedo’}
Example

* read several differently illuminated images


FName := 'photometric_stereo/pharma_braille_0' + [1:4] + '.png'
read_image(Images, FName)
* extract surface normals, gradients and albedo from images
uncalibrated_photometric_stereo(Images, NormalField, Gradient, Albedo, 'all')
derivate_vector_field (Gradient, Result, 0.1, 'mean_curvature')
reconstruct_height_field_from_gradient (Gradient, HeightField, 'poisson', \
[], [])

Result
The operator uncalibrated_photometric_stereo returns the NormalField for the given images as
well as the appropriate gradients for each pixel and the Albedo of the object.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
Alternatives
photometric_stereo
See also
photometric_stereo


References
H. Hayakawa: “Photometric stereo under a light source with arbitrary motion”. Journal Optical Society America,
Vol. 11, No. 11/November 1994.
Module
3D Metrology

5.5 Sheet of Light

apply_sheet_of_light_calibration (
Disparity : : SheetOfLightModelID : )

Apply the calibration transformations to the input disparity image.


The operator apply_sheet_of_light_calibration reads the disparity image Disparity, stores
it to the sheet-of-light model specified by SheetOfLightModelID and applies the calibration
transformation to this image, in order to compute the calibrated coordinates of the reconstructed
3D surface points. The resulting calibrated coordinates can be retrieved from the model by
using the operator get_sheet_of_light_result. The corresponding 3D object model can
be retrieved with get_sheet_of_light_result_object_model_3d. Note that prior to the
next call of apply_sheet_of_light_calibration for a disparity image of smaller height,
reset_sheet_of_light_model should be called.
The disparity image Disparity may have been acquired previously by using the operator
measure_profile_sheet_of_light or by an image acquisition device, which directly provides
disparity values and works according to the sheet-of-light technique.
In order to compute the calibrated coordinates, the parameters listed below must have been set for the sheet-of-light
model with the help of the operator set_sheet_of_light_param:

’calibration’: extent of the calibration transformation which shall be applied to the disparity image. ’calibration’
must be set to ’xz’, ’xyz’ or ’offset_scale’. Refer to set_sheet_of_light_param for details on this
parameter.
’camera_parameter’: the internal parameters of the camera used for the measurement. These parameters are
required when the calibration extent has been set to ’xyz’ or ’xz’.
’camera_pose’: the pose of the world coordinate system relative to the camera coordinate system. This pose is
required when the calibration extent has been set to ’xyz’ or ’xz’.
’lightplane_pose’: the pose of the light-plane coordinate system relative to the world coordinate system. The
light-plane coordinate system must be chosen so that its plane z=0 coincides with the light plane described
by the light line projector. This pose is required when the calibration extent has been set to ’xyz’ or ’xz’.
’movement_pose’: a pose representing the movement of the object between two successive profile images with re-
spect to the measurement system built by the camera and the laser. This pose is required when the calibration
extent has been set to ’xyz’. It is ignored when the calibration extent has been set to ’xz’.
’scale’: with this parameter you can scale the 3D coordinates X, Y and Z that result when applying the calibration
transformations to the disparity image. ’scale’ must be specified as the ratio desired unit/original unit. The
original unit is determined by the coordinates of the calibration object. If the original unit is meters (which is
the case if you use the standard calibration plate), you can set the desired unit directly by selecting ’m’, ’cm’,
’mm’ or ’um’ for the parameter ’scale’. By default, ’scale’ is set to 1.0.

Parameters
. Disparity (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Height or range image to be calibrated.
. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle of the sheet-of-light model.
Example


* ...
* Read an already acquired disparity map from file
read_image (Disparity, 'sheet_of_light/connection_rod_disparity.tif')
*
* Create a model and set the required parameters
gen_rectangle1 (ProfileRegion, 120, 75, 195, 710)
create_sheet_of_light_model (ProfileRegion, ['min_gray','num_profiles', \
'ambiguity_solving'], [70,290,'first'], \
SheetOfLightModelID)
set_sheet_of_light_param (SheetOfLightModelID, 'calibration', 'xyz')
set_sheet_of_light_param (SheetOfLightModelID, 'scale', 'mm')
set_sheet_of_light_param (SheetOfLightModelID, 'camera_parameter', \
CameraParameter)
set_sheet_of_light_param (SheetOfLightModelID, 'camera_pose', CameraPose)
set_sheet_of_light_param (SheetOfLightModelID, 'lightplane_pose', \
LightPlanePose)
set_sheet_of_light_param (SheetOfLightModelID, 'movement_pose', \
MovementPose)
*
* Apply the calibration transforms and
* get the resulting calibrated coordinates
apply_sheet_of_light_calibration (Disparity, SheetOfLightModelID)
get_sheet_of_light_result (X, SheetOfLightModelID, 'x')
get_sheet_of_light_result (Y, SheetOfLightModelID, 'y')
get_sheet_of_light_result (Z, SheetOfLightModelID, 'z')
*

Result
The operator apply_sheet_of_light_calibration returns the value 2 (H_MSG_TRUE) if the given
parameters are correct. Otherwise, an exception will be raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• SheetOfLightModelID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Successors
get_sheet_of_light_result, get_sheet_of_light_result_object_model_3d
Module
3D Metrology

calibrate_sheet_of_light ( : : SheetOfLightModelID : Error )

Calibrate a sheet-of-light setup with a 3D calibration object.


calibrate_sheet_of_light calibrates the sheet-of-light setup SheetOfLightModelID from one
disparity image of a 3D calibration object and returns the back projection error of the optimization in Error.
Overview
The calibration of a sheet-of-light setup with calibrate_sheet_of_light is simpler than the calibration
of a sheet-of-light setup with standard HALCON calibration plates, which is shown in the HDevelop example


calibrate_sheet_of_light_calplate.hdev. It is only necessary to obtain one uncalibrated
reconstruction, i.e., a disparity image, of a special 3D calibration object to calibrate the sheet-of-light model.
In the following, the steps that are necessary for the calibration are described.

Calibration of a sheet-of-light setup

Supply of a 3D calibration object


A special 3D calibration object must be provided. This calibration object must correspond to the CAD model
created with create_sheet_of_light_calib_object. The 3D calibration object has an inclined plane
on which a truncated pyramid is located. It has a thinner side, which is hereinafter referred to as the front side.
The thicker side is referred to as the back side of the calibration object.
The dimensions of the calibration object should be chosen such that the calibration object covers the complete
measuring volume. Be aware that only parts of the 3D calibration object above HeightMin (see
create_sheet_of_light_calib_object) are taken into account.
The CAD model, which is written as a DXF file, also serves as the description file of the calibration object.
Preparation of the sheet-of-light model
To prepare a sheet-of-light model for the calibration, the following steps must be performed.

• Create a sheet-of-light model with create_sheet_of_light_model and adapt the default parameters
to your specific measurement task.
• Set the initial parameters of the camera with set_sheet_of_light_param. So far, only pinhole cam-
eras with the division model are supported, i.e., only cameras of type ’area_scan_division’.
• Set the description file of the calibration object (created with create_sheet_of_light_calib_object)
with set_sheet_of_light_param.

Uncalibrated reconstruction of the 3D calibration object


The 3D calibration object must be reconstructed with the (uncalibrated) sheet-of-light model prepared above, i.e.,
a disparity image of the 3D calibration object must be created.


Disparity image of a calibration object

For this, the calibration object must be oriented such that either its front side or its back side intersects the
light plane first (i.e., the movement vector should be parallel to the Y axis of the calibration object, see
create_sheet_of_light_calib_object). As far as possible, the domain of the disparity image of the
calibration object should be restricted to the calibration object. Besides, the domain of the disparity image should
have no holes on the truncated pyramid. All four sides of the truncated pyramid must be clearly visible.
Calibration of the sheet-of-light setup
The calibration is then performed with calibrate_sheet_of_light. The returned Error is the RMS of
the distance of the reconstructed points to the calibration object in meters.
For sheet-of-light models calibrated with calibrate_sheet_of_light, in rare cases the parameters might
yield an unrealistic setup. However, the quality of measurements performed with the calibrated parameters is not
affected.
Parameters
. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle of the sheet-of-light model.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Average back projection error of the optimization.
Example

* Calibrate a sheet-of-light model with a 3D calibration object


gen_rectangle1 (Rectangle, 300, 0, 800, 1023)
CameraParam := ['area_scan_division', 0.016, 0, 4.65e-6, 4.65e-6, \
640.0, 512.0, 1280, 1024]
create_sheet_of_light_model (Rectangle, 'min_gray', 50, SheetOfLightModelID)
set_sheet_of_light_param (SheetOfLightModelID, 'camera_parameter', \
CameraParam)
set_sheet_of_light_param (SheetOfLightModelID, 'calibration_object', \
'calib_object.dxf')
* Uncalibrated reconstruction of the calibration object
for ProfileIndex := 1 to 1000 by 1
grab_image_async (Image, AcqHandle, -1)
measure_profile_sheet_of_light (Image, SheetOfLightModelID, [])
endfor
* Calibration of the sheet-of-light-model
calibrate_sheet_of_light (SheetOfLightModelID, Error)
* Now get a calibrated reconstruction of the calibration object
get_sheet_of_light_result_object_model_3d (SheetOfLightModelID, \
ObjectModel3D)

Result
The operator calibrate_sheet_of_light returns the value 2 (H_MSG_TRUE) if the calibration was
successful. Otherwise, an exception will be raised.
Execution Information


• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
This operator modifies the state of the following input parameter:
• SheetOfLightModelID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_sheet_of_light_model, set_sheet_of_light_param,
set_profile_sheet_of_light, measure_profile_sheet_of_light
Possible Successors
set_profile_sheet_of_light, apply_sheet_of_light_calibration
Module
3D Metrology

clear_sheet_of_light_model ( : : SheetOfLightModelID : )

Delete a sheet-of-light model and free the allocated memory.


The operator clear_sheet_of_light_model deletes a sheet-of-light model that was created by
create_sheet_of_light_model. All memory used by the model is freed. The handle of the model is
passed in SheetOfLightModelID. After the operator call it is invalid.
Parameters
. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle of the sheet-of-light model.
Result
The operator clear_sheet_of_light_model returns the value 2 (H_MSG_TRUE) if a valid handle is passed
and the referred sheet-of-light model can be freed correctly. Otherwise, an exception will be raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• SheetOfLightModelID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
See also
create_sheet_of_light_model
Module
3D Metrology

create_sheet_of_light_calib_object ( : : Width, Length,


HeightMin, HeightMax, FileName : )

Create a calibration object for sheet-of-light calibration.


create_sheet_of_light_calib_object creates a CAD model of a calibration object for sheet-of-light
calibration with calibrate_sheet_of_light and stores it in FileName.


(Figure: a calibration object for sheet-of-light calibration, showing the dimensions Width, Length, HeightMin, and HeightMax along the X and Z axes.)

The calibration object consists of a ramp with a truncated pyramid rotated by 45 degrees. The calibration object
contains an orientation mark in the form of a circular hole. The dimensions of the calibration target in Width,
Length, HeightMin, and HeightMax must be given in meters. Length must be at least 10% larger than
Width. The Z coordinate of the highest point on the truncated pyramid is at most HeightMax. The calibration
object might not be found by calibrate_sheet_of_light if the height difference between the truncated
pyramid and the ramp is too small. In this case, adjust HeightMin and HeightMax accordingly or increase the
sampling rate when acquiring the calibration data.
The dimensions of the calibration object should be chosen such that it is possible to cover the measuring volume of
the sheet-of-light setup. In addition, when selecting the Length of the calibration object, the speed of the
sheet-of-light setup should be considered such that the calibration object is sampled with enough profile
measurements.


Technical drawing of the calibration object, where c is the diameter of the orientation mark, d is the distance of
the pyramid from the front of the calibration object, h is the height of the truncated pyramid, b is the length of the
diagonal of the pyramid at the bottom, t is the corresponding length at the top, and α is the angle of the ramp as
seen in the drawing. You can calculate these dimensions with the procedure
get_sheet_of_light_calib_object_dimensions.

Set the parameter ’calibration_object’ to FileName with set_sheet_of_light_param to use the
generated calibration object in a subsequent call to calibrate_sheet_of_light.
Note that MVTec does not offer 3D calibration objects. Instead, use
create_sheet_of_light_calib_object to generate a customized CAD model of a calibration
object. This CAD model can then be used to produce the calibration object. Milled aluminum is an established
material for this. However, depending on the required precision, its thermal stability may be a problem. Note that
the surface should be bright. Its color may have to be adjusted to provide a


sufficient contrast to the color of the laser. Additionally, the surface must be neither translucent nor reflective. To
achieve this, you can anodize or lacquer it. Please note that when lacquering it, the accuracy might be decreased
by the applied paintwork. However, a surface that is too rough also reduces the precision. It is
advisable to have the produced calibration object remeasured to determine whether the required accuracy can
be achieved. The accuracy of the calibration object should be ten times higher than the required accuracy of
measurement. After having the object measured, the results can be manually inserted into the DXF file that can
then be used for the calibration with calibrate_sheet_of_light.
Parameters
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Width of the object.
Default: 0.1
. Length (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Length of the object.
Default: 0.15
. HeightMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Minimum height of the ramp.
Default: 0.005
. HeightMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Maximum height of the ramp.
Default: 0.04
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
Filename of the model of the calibration object.
Default: ’calib_object.dxf’
File extension: .dxf
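Example
A minimal sketch (the dimensions are arbitrary example values; SheetOfLightModelID is assumed to be an existing sheet-of-light model):

* Generate a calibration object of 20 cm x 30 cm with heights between
* 1 cm and 6 cm and write its CAD model to a DXF file.
create_sheet_of_light_calib_object (0.2, 0.3, 0.01, 0.06, 'calib_object.dxf')
* Use the generated model as calibration object of a sheet-of-light model.
set_sheet_of_light_param (SheetOfLightModelID, 'calibration_object', \
                          'calib_object.dxf')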
Result
The operator create_sheet_of_light_calib_object returns the value 2 (H_MSG_TRUE) if the given
parameters are correct. Otherwise, an exception will be raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Module
3D Metrology

create_sheet_of_light_model ( ProfileRegion : : GenParamName,


GenParamValue : SheetOfLightModelID )

Create a model to perform 3D-measurements using the sheet-of-light technique.


The operator create_sheet_of_light_model creates a model to perform 3D-measurements using the
sheet-of-light technique.
The sheet-of-light technique performs a three-dimensional reconstruction of the surface of an opaque and diffusely
reflecting solid by using an area scan camera and a light line projector (typically a laser line projector). The camera
and the line projector must be mounted so that their main axes form an angle of triangulation. The value of the
angle of triangulation is typically chosen between 30° and 60°. The projected light line defines a plane in space.
This plane intersects the surface of the solid under measurement and builds a profile of the surface visible for the
camera. By moving the solid in front of the measurement system (i.e., the combination of the camera and the
line projector), it is possible to record the whole surface of the solid. As an alternative, the measurement system
can also be moved over the surface under measurement. Please note that the profiles must be oriented roughly
horizontally in the profile images, because they are processed column by column.
If geometrical information about the measurement setup is available, it is possible to compute true three-
dimensional coordinates of the reconstructed surface. For an overview of the required geometrical (i.e., calibration)


information, refer to the operator set_sheet_of_light_param. If such information is not available, the
result of the measurement is a disparity image, where each pixel holds a record of the subpixel precise position of
the detected profile.
The operator returns a handle to the sheet-of-light model in SheetOfLightModelID, which is used for all fur-
ther operations on the sheet-of-light model, like modifying parameters of the model, measuring profiles, applying
calibration transformations or accessing the results of measurements.
Mandatory input iconic parameters
In order to perform measurements, you will have to set the following input iconic parameter:

ProfileRegion: defines the region of the profile images, which will be processed by the operator
measure_profile_sheet_of_light. This region should be rectangular and can be generated e.g.,
by using the operator gen_rectangle1. If the region passed to ProfileRegion is not rectangular, its
smallest enclosing rectangle (bounding box) will be used. Note that ProfileRegion is only taken into
account by the operator measure_profile_sheet_of_light and is ignored when disparity images
are processed.

Default settings of the sheet-of-light model parameters


The default settings of the sheet-of-light model were chosen to perform non-calibrated measurements in a basic
configuration. The following list provides an overview of the parameter values used by default (refer to
set_sheet_of_light_param for a detailed description of all supported generic parameters):

’method’ is set to ’center_of_gravity’
’min_gray’ is set to 100
’num_profiles’ is set to 512
’ambiguity_solving’ is set to ’first’
’score_type’ is set to ’none’
’calibration’ is set to ’none’

Modify the sheet-of-light model parameters


We recommend adapting the default parameters to your specific measurement task, in order to enhance the quality
of the measurement or to shorten the runtime. You will also have to modify the default values of the model
parameters if you need calibrated results.
create_sheet_of_light_model provides the generic parameters GenParamName and
GenParamValue to modify the default value of most of the model parameters. Note that model parameters
can also be set by using the operator set_sheet_of_light_param. Nevertheless, with this second operator
only one parameter can be set at a time, whereas it is possible to set several parameters at the same time with
create_sheet_of_light_model. Refer to set_sheet_of_light_param for a detailed description of
all supported generic parameters.
Please note that the following model parameters cannot be set with the operator
create_sheet_of_light_model, and thus have to be set with the operator
set_sheet_of_light_param: ’camera_parameter’, ’camera_pose’, ’lightplane_pose’, and ’movement_pose’.
It is possible to query the value of the model parameters with the operator get_sheet_of_light_param. The
names of all supported model parameters are returned by the operator query_sheet_of_light_params.
Use the simplified sheet-of-light model parameters
In the case of a simple setup, or if no real metric calibration is necessary, the transformation of the observed
disparities into 3D values can be controlled using a simplified parameter set of the sheet-of-light model:
By setting the parameter ’calibration’ to ’offset_scale’ with set_sheet_of_light_param, the poses and
camera parameters are set to values such that an offset of one pixel corresponds to one unit in the 3D result. This
allows creating a 3D object model and 3D images from an uncalibrated sheet-of-light model.
The transformation from disparity to 3D coordinates can be controlled by six parameters: ’scale_x’, ’scale_y’,
’scale_z’, ’offset_x’, ’offset_y’, ’offset_z’. Refer to set_sheet_of_light_param for a detailed description
of all supported generic parameters.
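A minimal sketch of this simplified calibration (the scale and offset values are arbitrary example values):

* Map disparities to 3D values without a metric calibration.
set_sheet_of_light_param (SheetOfLightModelID, 'calibration', 'offset_scale')
* One profile corresponds to 0.5 units in y direction; shift z by -10 units.
set_sheet_of_light_param (SheetOfLightModelID, 'scale_y', 0.5)
set_sheet_of_light_param (SheetOfLightModelID, 'offset_z', -10.0)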
Use of a handle in multiple threads


Please note that you have to take special care when using a handle of a sheet-of-light model
SheetOfLightModelID in multiple threads. One and the same handle cannot be used concurrently in different
threads if they modify the handle. Thus, you have to be careful especially if the threads call operators that change
the data of the handle. You can find an according hint in the ’Attention’ section of the operators. If you still want
to use the same handle in operators that concurrently write into the handle in different threads, you have to
synchronize the threads to ensure that they do not access the same handle simultaneously. If you are not sure
whether the usage of the same handle is thread-safe, please check whether the ’Attention’ section of the respective
reference manual entry contains a warning pointing to this problem. However, different handles can be used
independently and safely in different threads.
Parameters
. ProfileRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; object
Region of the images containing the profiles to be processed. If the provided region is not rectangular, its
smallest enclosing rectangle will be used.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Names of the generic parameters that can be adjusted for the sheet-of-light model.
Default: ’min_gray’
List of values: GenParamName ∈ {’min_gray’, ’method’, ’ambiguity_solving’, ’score_type’,
’num_profiles’, ’calibration’, ’scale’, ’scale_x’, ’scale_y’, ’scale_z’, ’offset_x’, ’offset_y’, ’offset_z’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; integer / real / string
Values of the generic parameters that can be adjusted for the sheet-of-light model.
Default: 50
Suggested values: GenParamValue ∈ {’default’, ’center_of_gravity’, ’last’, ’first’, ’brightest’, ’none’,
’intensity’, ’width’, ’offset_scale’, 50, 100, 150, 180}
. SheetOfLightModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle for using and accessing the sheet-of-light model.
Example

* Create the rectangular region in which the profiles are measured.


gen_rectangle1 (ProfileRegion, 120, 75, 195, 710)
*
* Create a model in order to measure profiles according to
* the sheet-of-light technique. Simultaneously set some
* parameters for the model.
create_sheet_of_light_model (ProfileRegion, ['min_gray','num_profiles', \
'ambiguity_solving','score_type'], \
[70,290,'first','width'], \
SheetOfLightModelID)
*
* Measure the profile from successive images
for Index := 1 to 290 by 1
read_image (ProfileImage, 'sheet_of_light/connection_rod_'+Index$'.3')
dev_display (ProfileImage)
dev_display (ProfileRegion)
measure_profile_sheet_of_light (ProfileImage, SheetOfLightModelID, [])
endfor
*
* Get the resulting disparity and score images
get_sheet_of_light_result (Disparity, SheetOfLightModelID, 'disparity')
get_sheet_of_light_result (Score, SheetOfLightModelID, 'score')
*
* Close the sheet-of-light handle once the measurement
* has been performed

Result
The operator create_sheet_of_light_model returns the value 2 (H_MSG_TRUE) if the given parameters
are correct. Otherwise, an exception will be raised.
Execution Information


• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
gen_rectangle1
Possible Successors
set_sheet_of_light_param, measure_profile_sheet_of_light
See also
clear_sheet_of_light_model, calibrate_sheet_of_light
Module
3D Metrology

deserialize_sheet_of_light_model (
: : SerializedItemHandle : SheetOfLightModelID )

Deserialize a sheet-of-light model.


deserialize_sheet_of_light_model deserializes a sheet-of-light model that was serialized by
serialize_sheet_of_light_model (see fwrite_serialized_item for an introduction of the ba-
sic principle of serialization). The serialized model is defined by the handle SerializedItemHandle. The
deserialized values are stored in a new sheet-of-light model with the handle SheetOfLightModelID.
Parameters
. SerializedItemHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
. SheetOfLightModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle of the sheet-of-light model.
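Example
A minimal sketch of serializing a model to a file and restoring it later (the file name ’sheet_of_light.hsm’ is only an example):

* Serialize an existing sheet-of-light model and write it to a file.
serialize_sheet_of_light_model (SheetOfLightModelID, SerializedItemHandle)
open_file ('sheet_of_light.hsm', 'output_binary', FileHandle)
fwrite_serialized_item (FileHandle, SerializedItemHandle)
close_file (FileHandle)
* Later, e.g., in another application, read the item and deserialize it.
open_file ('sheet_of_light.hsm', 'input_binary', FileHandle)
fread_serialized_item (FileHandle, SerializedItemHandle)
close_file (FileHandle)
deserialize_sheet_of_light_model (SerializedItemHandle, SheetOfLightModelID)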
Result
The operator deserialize_sheet_of_light_model returns the value 2 (H_MSG_TRUE) if the sheet-of-
light model can be correctly deserialized. Otherwise, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
fread_serialized_item, receive_serialized_item, serialize_sheet_of_light_model
Possible Successors
measure_profile_sheet_of_light
Alternatives
create_sheet_of_light_model
See also
serialize_sheet_of_light_model
Module
3D Metrology

get_sheet_of_light_param ( : : SheetOfLightModelID,
GenParamName : GenParamValue )

Get the value of a parameter, which has been set in a sheet-of-light model.


The operator get_sheet_of_light_param is used to query the values of the different parameters of a
sheet-of-light model. The names of the desired parameters are passed in the generic parameter GenParamName,
the corresponding values are returned in GenParamValue. All these parameters can be set and changed at any
time with the operator set_sheet_of_light_param.
It is not possible to query the values of several parameters with a single operator call. In order to request the values
of several parameters, you have to call the operator get_sheet_of_light_param successively.
The values of the following model parameters can be queried:
Measurement of the profiles:

’method’: defines the method used to determine the position of the profile. The values ’default’ and
’center_of_gravity’ both refer to the same method, whereby the position of the profile is determined column
by column with subpixel accuracy by computing the center of gravity of the gray values gi of all pixels
fulfilling the condition:
gi ≥ ’min_gray’
’min_gray’: the smallest gray values taken into account for the measurement of the position of the profile (see
’method’ above).
’num_profiles’: number of profiles for which memory has been allocated within the sheet-of-light model. By
default, ’num_profiles’ is set to 512. If this number of profiles is exceeded during the measurement, memory
will be reallocated automatically at runtime. Since the reallocation process requires some time, we recommend
setting ’num_profiles’ to a reasonable value before the measurement is started.
’ambiguity_solving’: this model parameter determines which candidate shall be chosen, if the determination of
the position of the light line is ambiguous.
’first’: the first encountered candidate is returned. This method is the fastest.
’last’: the last encountered candidate is returned.
’brightest’: for each candidate, the brightness of the profile is computed and the candidate having the highest
brightness is returned. The brightness is computed according to:
brightness = (1/n) · Σ gi ,
where gi is the gray value of the pixel and n the number of pixels taken into consideration to determine the
position of the profile.
’score_type’: this model parameter selects which type of score will be calculated during the measurement of the
disparity. The score values give an indication of the quality of the computed disparity.
’none’: no score is computed.
’width’: for each pixel of the disparity, a score value is set to the local width of the profile (i.e., the number
of pixels used to compute the position of the profile).
’intensity’: for each pixel of the disparity, a score value is evaluated by computing the local intensity of the
profile according to:
score = (1/n) · Σ gi
where gi is the gray value of the pixel and n the number of pixels taken into consideration to determine the
position of the profile.

Calibration of the measurement:

’calibration’: extent of the calibration transformation which shall be applied to the disparity image:
’none’: no calibration transformation is applied.
’xz’: the calibration transformations which describe the geometrical properties of the measurement system
(camera and light line projector) are taken into account, but the movement of the object during the
measurement is not taken into account.
’xyz’: the calibration transformations which describe the geometrical properties of the measurement system
(camera and light line projector) as well as the transformation which describes the movement of the object
during the measurement are taken into account.


’offset_scale’: a simplified set of parameters to describe the setup, which can be used with default values
or can be controlled by six parameters. Three of the parameters describe an anisotropic scaling: ’scale_x’
describes the scaling of a pixel in column direction into the new x-axis, ’scale_y’ describes the linear movement
between two profiles, and ’scale_z’ describes the scaling of the measured disparities into the new z-axis. The
other three parameters describe the offset of the frame of reference of the resulting x, y, z values (’offset_x’,
’offset_y’, ’offset_z’).
’camera_parameter’: the internal parameters of the camera used for the measurement. Those parameters are
required when the calibration extent has been set to ’xz’ or ’xyz’.
’camera_pose’: the pose of the world coordinate system relative to the camera coordinate system. This pose is
required when the calibration extent has been set to ’xz’ or ’xyz’.
’lightplane_pose’: the pose of the light-plane coordinate system relative to the world coordinate system. The
light-plane coordinate system must be chosen so that its plane z=0 coincides with the light plane described
by the light line projector. This pose is required when the calibration extent has been set to ’xz’ or ’xyz’.
’movement_pose’: a pose representing the movement of the object between two successive profile images with re-
spect to the measurement system built by the camera and the laser. This pose is required when the calibration
extent has been set to ’xyz’.
’scale’: with this parameter you can scale the 3D coordinates X, Y and Z that result when applying the calibration
transformations to the disparity image. ’scale’ must be specified as the ratio desired unit/original unit. The
original unit is determined by the coordinates of the calibration object. If you use the standard calibration
plate the original unit is meter. This parameter can only be set if the calibration extent has been set to
’offset_scale’, ’xz’ or ’xyz’. By default, ’scale’ is set to 1.0.
’scale_x’: This value defines the width of a pixel in 3D space. The value is only applicable if the calibration extent
is set to ’offset_scale’. By default, ’scale_x’ is set to 1.0.
’scale_y’: This value defines the linear movement between two profiles in 3D space. The value is only applicable
if the calibration extent is set to ’offset_scale’. By default, ’scale_y’ is set to 10.0.
’scale_z’: This value defines the height of disparities in 3D space. The value is only applicable if the calibration
extent is set to ’offset_scale’. By default, ’scale_z’ is set to 1.0.
’offset_x’: This value defines the x offset of the reference frame for 3D results. The value is only applicable if the
calibration extent is set to ’offset_scale’. By default, ’offset_x’ is set to 0.0.
’offset_y’: This value defines the y offset of the reference frame for 3D results. The value is only applicable if the
calibration extent is set to ’offset_scale’. By default, ’offset_y’ is set to 0.0.
’offset_z’: This value defines the z offset of the reference frame for 3D results. The value is only applicable if the
calibration extent is set to ’offset_scale’. By default, ’offset_z’ is set to 0.0.

Parameters
. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle of the sheet-of-light model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Name of the generic parameter that shall be queried.
Default: ’method’
List of values: GenParamName ∈ {’min_gray’, ’method’, ’ambiguity_solving’, ’score_type’,
’num_profiles’, ’calibration’, ’camera_parameter’, ’camera_pose’, ’lightplane_pose’, ’movement_pose’,
’scale’, ’scale_x’, ’scale_y’, ’scale_z’, ’offset_x’, ’offset_y’, ’offset_z’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Value of the model parameter that shall be queried.
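Example
A minimal sketch that queries a few parameters of an existing model:

* Query the measurement method, the gray value threshold, and the
* calibration extent of the sheet-of-light model.
get_sheet_of_light_param (SheetOfLightModelID, 'method', Method)
get_sheet_of_light_param (SheetOfLightModelID, 'min_gray', MinGray)
get_sheet_of_light_param (SheetOfLightModelID, 'calibration', Calibration)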
Result
The operator get_sheet_of_light_param returns the value 2 (H_MSG_TRUE) if the given parameters are
correct. Otherwise, an exception will be raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.


Possible Predecessors
query_sheet_of_light_params, set_sheet_of_light_param
Possible Successors
measure_profile_sheet_of_light, set_sheet_of_light_param,
apply_sheet_of_light_calibration
Module
3D Metrology

get_sheet_of_light_result ( : ResultValue : SheetOfLightModelID, ResultName : )

Get the iconic results of a measurement performed with the sheet-of-light technique.
The operator get_sheet_of_light_result provides access to the results of the calibrated and uncalibrated
measurements performed with a given sheet-of-light model. The different kinds of results can be selected by setting
the value of the parameter ResultName as described below:
Non-calibrated results:

’disparity’: the measured disparity, i.e., the subpixel row value at which the profile was detected, is returned for
each pixel. The disparity values can be considered as non-calibrated pseudo-range values.
’score’: the score values computed according to the value of the parameter ’score_type’ are returned. If
the parameter ’score_type’ has been set to ’none’, no score value is computed during the measure-
ment, therefore the returned image is empty. Refer to create_sheet_of_light_model and
set_sheet_of_light_param for details on the possible values of the model parameter ’score_type’.

Calibrated results:

’x’: The calibrated X-coordinates of the reconstructed surface are returned as an image.
’y’: The calibrated Y-coordinates of the reconstructed surface are returned as an image.
’z’: The calibrated Z-coordinates of the reconstructed surface are returned as an image.

Please note that the pixel values of the images returned when setting ResultName to ’x’, ’y’ or ’z’ have the
semantic of coordinates with respect to the world coordinate system that is implicitly defined during the cali-
bration of the system. The unit of the returned coordinates depends on the value of the parameter ’scale’. (see
create_sheet_of_light_model and set_sheet_of_light_param for details on the possible values
of the model parameter ’scale’.)
The operator get_sheet_of_light_result returns an empty object if the desired result has not been com-
puted.
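As a brief sketch (assuming the measurement has already been performed on a suitably calibrated model; variable names are illustrative), the different results could be retrieved as follows:

    * non-calibrated results
    get_sheet_of_light_result (Disparity, SheetOfLightModelID, 'disparity')
    get_sheet_of_light_result (Score, SheetOfLightModelID, 'score')
    * calibrated results (availability depends on the calibration extent)
    get_sheet_of_light_result (X, SheetOfLightModelID, 'x')
    get_sheet_of_light_result (Y, SheetOfLightModelID, 'y')
    get_sheet_of_light_result (Z, SheetOfLightModelID, 'z')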
Parameters
. ResultValue (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Desired measurement result.
. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle of the sheet-of-light model to be used.
. ResultName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Specify which result of the measurement shall be provided.
Default: ’disparity’
List of values: ResultName ∈ {’disparity’, ’score’, ’x’, ’y’, ’z’}
Result
The operator get_sheet_of_light_result returns the value 2 (H_MSG_TRUE) if the given parameters are
correct. Otherwise, an exception will be raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).


• Processed without parallelization.


Module
3D Metrology

get_sheet_of_light_result_object_model_3d (
: : SheetOfLightModelID : ObjectModel3D )

Get the result of a calibrated measurement performed with the sheet-of-light technique as a 3D object model.
The operator get_sheet_of_light_result_object_model_3d returns the result of a fully calibrated
sheet-of-light measurement as a 3D object model. The handle of the sheet-of-light model with which the mea-
surement is performed must be passed to SheetOfLightModelID. The calibration extent of the sheet-of-light
model (’calibration’) must have been set to ’xyz’ or ’offset_scale’ before applying the measurement, otherwise the
computed coordinates cannot be returned as a 3D object model and an exception is raised.
The handle of the 3D object model resulting from the measurement is returned in ObjectModel3D. For the
3D points within this 3D object model no triangular meshing is available, therefore no faces are stored in the
3D object model. If a 3D object model with triangular meshing is required for the subsequent processing, use
the operator get_sheet_of_light_result in order to retrieve the ’x’, ’y’, and ’z’ coordinates from the
sheet-of-light model and then call the operator xyz_to_object_model_3d with suitable parameters. Refer
to xyz_to_object_model_3d for more information about 3D object models.
The unit of the returned coordinates depends on the value of the parameter ’scale’ that was set for the
sheet-of-light model before applying the measurement. See create_sheet_of_light_model and
set_sheet_of_light_param for details on the possible values of the model parameter ’scale’. The op-
erator get_sheet_of_light_result_object_model_3d returns a handle to an empty 3D object model
if the desired result has not been measured yet.
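Both variants described above could be sketched as follows (variable names are illustrative):

    * direct retrieval as a 3D object model (no triangular meshing)
    get_sheet_of_light_result_object_model_3d (SheetOfLightModelID, ObjectModel3D)
    * alternative route if a model suitable for subsequent meshing is needed
    get_sheet_of_light_result (X, SheetOfLightModelID, 'x')
    get_sheet_of_light_result (Y, SheetOfLightModelID, 'y')
    get_sheet_of_light_result (Z, SheetOfLightModelID, 'z')
    xyz_to_object_model_3d (X, Y, Z, ObjectModel3DXYZ)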
Parameters
. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle for accessing the sheet-of-light model.
. ObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the resulting 3D object model.
Result
The operator get_sheet_of_light_result_object_model_3d returns the value 2 (H_MSG_TRUE) if
the given parameters are correct. Otherwise, an exception will be raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
create_sheet_of_light_model, measure_profile_sheet_of_light,
calibrate_sheet_of_light
Possible Successors
clear_object_model_3d
Module
3D Metrology

measure_profile_sheet_of_light (
ProfileImage : : SheetOfLightModelID, MovementPose : )

Process the profile image provided as input and store the resulting disparity to the sheet-of-light model.


The operator measure_profile_sheet_of_light processes the ProfileImage and stores the resulting
disparity values to the sheet-of-light model. Please note that ProfileImage will only be processed in the
region defined by ProfileRegion as set with the operator create_sheet_of_light_model. Since
ProfileImage is processed column by column, the profile must be oriented roughly horizontally.
Influence of different model parameters
If the model parameter ’score_type’ has been set to ’intensity’ or ’width’, score values are also computed and stored
into the model. Refer to set_sheet_of_light_param for details on the possible values of ’score_type’.
If the model parameter ’calibration’ has been set to ’xz’, ’xyz’, or ’offset_scale’ and all parameters required to deter-
mine the calibration transformation have been set to the sheet-of-light model, the calibration transformations will be
automatically applied to the disparity values after the measurement. Refer to set_sheet_of_light_param
for details on setting the calibration parameters to the sheet-of-light model.
Setting MovementPose
MovementPose describes the movement of the object between the acquisition of the previous profile and the
acquisition of the current profile.
If the model parameter ’calibration’ has been set to ’none’ or ’xz’ (see set_sheet_of_light_param)
the movement of the object is not taken into consideration by the calibration transformation. Therefore,
MovementPose is ignored, and it can be set to an empty tuple.
If the model parameter ’calibration’ has been set to ’xyz’, the pose describing the movement of the object must
be specified to the sheet-of-light model. This can be done here with MovementPose or with the parameter
’movement_pose’ in the operator set_sheet_of_light_param.
If the model parameter ’calibration’ has been set to ’offset_scale’, a movement can be specified, but keep in mind
that the space to which this transformation is applied is most likely not metric.
If the movement of the object between the recording of two successive profiles is constant, we recommend setting
MovementPose here to an empty tuple and setting the constant pose via the parameter ’movement_pose’ in
the operator set_sheet_of_light_param. This configuration is often encountered, for example when the
object under measurement is moved by a conveyor belt and measured by a fixed measurement system.
If the movement of the object between the recording of two successive profiles is not constant, for example because
the measurement system is moved over the object by a robot, you must set MovementPose here for each call of
measure_profile_sheet_of_light.
MovementPose must be expressed in the world coordinate system that is implicitly defined during the calibration
of the measurement system.
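For the common case of a constant movement (e.g., a conveyor belt), the measurement loop might be sketched as follows; the acquisition handle and the number of profiles are assumptions of this example:

    * set the constant movement once in the model
    set_sheet_of_light_param (SheetOfLightModelID, 'movement_pose', MovementPose)
    for Index := 1 to NumProfiles by 1
        grab_image (ProfileImage, AcqHandle)
        * MovementPose is passed as an empty tuple, the pose stored in the model is used
        measure_profile_sheet_of_light (ProfileImage, SheetOfLightModelID, [])
    endfor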
Parameters
. ProfileImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2
Input image.
. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle of the sheet-of-light model.
. MovementPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer / real
Pose describing the movement of the scene under measurement between the previously processed profile
image and the current profile image.
Result
The operator measure_profile_sheet_of_light returns the value 2 (H_MSG_TRUE) if the given param-
eters are correct. Otherwise, an exception will be raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• SheetOfLightModelID


During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Successors
apply_sheet_of_light_calibration, get_sheet_of_light_result
See also
query_sheet_of_light_params, get_sheet_of_light_param,
get_sheet_of_light_result, apply_sheet_of_light_calibration
Module
3D Metrology

query_sheet_of_light_params ( : : SheetOfLightModelID,
QueryName : GenParamName )

For a given sheet-of-light model get the names of the generic iconic or control parameters that can be used in the
different sheet-of-light operators.
The operator query_sheet_of_light_params returns the names of the generic parameters that are sup-
ported by the following operators create_sheet_of_light_model, set_sheet_of_light_param,
get_sheet_of_light_param and get_sheet_of_light_result. The parameter QueryName is
used to select the desired parameter group:

’create_model_params’: create_sheet_of_light_model – Parameters for adjusting the sheet-of-light
model during its creation.
’set_model_params’: set_sheet_of_light_param – Parameters for adjusting the parameters of an avail-
able sheet-of-light model.
’get_model_params’: get_sheet_of_light_param – Parameters for querying the values of the parameters
of a sheet-of-light model.
’get_result_objects’: get_sheet_of_light_result – Parameters for accessing the iconic objects resulting
from the measurement.

The returned parameter list does not depend on the current state of the model or its results.
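For example, the following sketch queries which generic parameters can be set and which result objects can be retrieved for a given model (variable names are illustrative):

    query_sheet_of_light_params (SheetOfLightModelID, 'set_model_params', SettableParams)
    query_sheet_of_light_params (SheetOfLightModelID, 'get_result_objects', ResultNames)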
Parameters
. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle of the sheet-of-light model.
. QueryName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Name of the parameter group.
Default: ’create_model_params’
List of values: QueryName ∈ {’create_model_params’, ’set_model_params’, ’get_model_params’,
’get_result_objects’}
. GenParamName (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value-array ; string
List containing the names of the supported generic parameters.
Result
The operator query_sheet_of_light_params returns the value 2 (H_MSG_TRUE) if the given parameters
are correct. Otherwise, an exception will be raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Successors
create_sheet_of_light_model, set_sheet_of_light_param,
get_sheet_of_light_param, get_sheet_of_light_result
Module
3D Metrology


read_sheet_of_light_model ( : : FileName : SheetOfLightModelID )

Read a sheet-of-light model from a file and create a new model.


The operator read_sheet_of_light_model reads the sheet-of-light model from the file FileName and
creates a new model that is an identical copy of the saved model. The parameter SheetOfLightModelID
returns the handle of the new model. The model file FileName must have been created by the operator
write_sheet_of_light_model. The default HALCON file extension for sheet-of-light model is ’solm’.
Parameters
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
Name of the sheet-of-light model file.
Default: ’sheet_of_light_model.solm’
File extension: .solm
. SheetOfLightModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle of the sheet-of-light model.
Result
The operator read_sheet_of_light_model returns the value 2 (H_MSG_TRUE) if the named file was found
and correctly read. Otherwise, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
measure_profile_sheet_of_light
Alternatives
create_sheet_of_light_model
See also
write_sheet_of_light_model
Module
3D Metrology

reset_sheet_of_light_model ( : : SheetOfLightModelID : )

Reset a sheet-of-light model.


The operator reset_sheet_of_light_model resets a sheet-of-light model that was created by
create_sheet_of_light_model. All indices and result arrays used by the model are reset. The parameters
of the model remain unchanged. The handle of the model is passed in SheetOfLightModelID.
Parameters

. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle


Handle of the sheet-of-light model.
Result
The operator reset_sheet_of_light_model returns the value 2 (H_MSG_TRUE) if a valid handle is passed
and the sheet-of-light model can be reset correctly. Otherwise, an exception will be raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).


• Processed without parallelization.
This operator modifies the state of the following input parameter:

• SheetOfLightModelID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
See also
clear_sheet_of_light_model
Module
3D Metrology

serialize_sheet_of_light_model (
: : SheetOfLightModelID : SerializedItemHandle )

Serialize a sheet-of-light model.


serialize_sheet_of_light_model serializes a sheet-of-light model (see
fwrite_serialized_item for an introduction of the basic principle of serialization). The same data
that is written in a file by write_sheet_of_light_model is converted to a serialized item. The sheet-of-
light model is defined by the handle SheetOfLightModelID. The serialized model is returned by the handle
SerializedItemHandle and can be deserialized by deserialize_sheet_of_light_model.
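A minimal sketch of serializing a model and writing it to a file (the file name is only an example):

    serialize_sheet_of_light_model (SheetOfLightModelID, SerializedItemHandle)
    open_file ('sheet_of_light_model.ser', 'output_binary', FileHandle)
    fwrite_serialized_item (FileHandle, SerializedItemHandle)
    close_file (FileHandle)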
Parameters
. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle of the sheet-of-light model.
. SerializedItemHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
Result
The operator serialize_sheet_of_light_model returns the value 2 (H_MSG_TRUE) if the passed han-
dle of the sheet-of-light model is valid and if the model can be serialized into the serialized item. Otherwise, an
exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
create_sheet_of_light_model, set_sheet_of_light_param
Possible Successors
fwrite_serialized_item, send_serialized_item, deserialize_sheet_of_light_model
See also
deserialize_sheet_of_light_model
Module
3D Metrology

set_profile_sheet_of_light (
ProfileDisparityImage : : SheetOfLightModelID, MovementPoses : )

Set sheet of light profiles by measured disparities.


set_profile_sheet_of_light adds sheet-of-light profiles to the sheet-of-light model


SheetOfLightModelID. The profiles are specified as rows in a disparity image in
ProfileDisparityImage. Each of the profiles can have an individual pose set in MovementPoses,
which is interpreted as the movement relative to the previous row. If no pose is set, the default transformation is used,
which can be set by set_sheet_of_light_param. If only one pose is set, this pose will become the default
transformation.
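As a sketch, a previously measured disparity image with one profile per row and a constant movement between the rows could be fed into the model like this (variable names are illustrative):

    * a single pose in MovementPoses becomes the default transformation
    set_profile_sheet_of_light (ProfileDisparityImage, SheetOfLightModelID, MovementPose)
    get_sheet_of_light_result_object_model_3d (SheetOfLightModelID, ObjectModel3D)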
Parameters
. ProfileDisparityImage (input_object) . . . . . . . . . . . . singlechannelimage ; object : byte / uint2 / real
Disparity image that contains several profiles.
. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle of the sheet-of-light model.
. MovementPoses (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number-array ; integer / real
Poses describing the movement of the scene under measurement between the previously processed profile
image and the current profile image.
Result
The operator set_profile_sheet_of_light returns the value 2 (H_MSG_TRUE) if the given parameters
are correct. Otherwise, an exception will be raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• SheetOfLightModelID

During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Successors
get_sheet_of_light_result, get_sheet_of_light_result_object_model_3d
See also
query_sheet_of_light_params, get_sheet_of_light_param,
get_sheet_of_light_result, apply_sheet_of_light_calibration
Module
3D Metrology

set_sheet_of_light_param ( : : SheetOfLightModelID, GenParamName, GenParamValue : )

Set selected parameters of the sheet-of-light model.


The operator set_sheet_of_light_param is used to set or change a single parameter of a sheet-of-light
model in order to adapt the model to a particular measurement task. All parameters, except the internal camera pa-
rameters ’camera_parameters’ and the following poses ’camera_pose’, ’lightplane_pose’, and ’movement_pose’
can also be set while creating a sheet-of-light model with create_sheet_of_light_model. The current
configuration of the sheet-of-light model can be queried with the operator get_sheet_of_light_param.
A list with the names of all parameters that can be set for the sheet-of-light model is returned by
query_sheet_of_light_params.
The following overview lists the different generic parameters with the respective value ranges and default values:
Measurement of the profiles:

’method’: defines the method used to determine the position of the profile. The values ’default’ and
’center_of_gravity’ both refer to the same method, whereby the position of the profile is determined column


by column with subpixel accuracy by computing the center of gravity of the gray values gi of all pixels
fulfilling the condition:
gi ≥ ’min_gray’
’min_gray’: lowest gray values taken into account for the measurement of the position of the profile (see ’cen-
ter_of_gravity’).
Suggested values: 20, 50, 100, 128, 200, 220, 250
Default: 100
’num_profiles’: number of profiles for which memory has been allocated within the sheet-of-light model. By
default, ’num_profiles’ is set to 512. If this number of profiles is exceeded, memory will be reallocated
automatically during the measurement.
Suggested values: 1, 2, 50, 100, 512, 1024, 3000
Default: 512
’ambiguity_solving’: method applied to determine which candidate shall be chosen if the determination of the
position of the profile is ambiguous.
’first’: the first encountered candidate is returned. This method is the fastest.
’last’: the last encountered candidate is returned.
’brightest’: for each candidate, the brightness of the profile is computed and the candidate having the highest
brightness is returned. The brightness is computed according to:
brightness = (1/n) · Σ gi ,
where gi is the gray value of the pixel and n the number of pixels taken into consideration to determine the
position of the profile.
Default: ’first’
’score_type’: method used to calculate a score for the measurement of the position of the profile.
’none’: no score is computed.
’width’: for each pixel of the disparity, the score value is set to the number of pixels used to determine the
disparity value.
’intensity’: for each pixel of the disparity, a score value is evaluated by computing the local intensity of the
profile according to:
score = (1/n) · Σ gi
where gi is the gray value of the pixel and n the number of pixels taken into consideration to determine the
position of the profile.
Default: ’none’

Calibration of the measurement:

’calibration’: extent of the calibration transformation which shall be applied to the disparity image:
’none’: no calibration transformation is applied.
’xz’: the calibration transformations which describe the geometrical properties of the measurement system
(camera and light line projector) are taken into account, but the movement of the object during the measure-
ment is not taken into account.
’xyz’: the calibration transformations which describe the geometrical properties of the measurement system
(camera and light line projector) as well as the transformation which describe the movement of the object
during the measurement are taken into account.
’offset_scale’: a simplified description of the setup that can be used with default values or controlled by six
parameters. Three of the parameters describe an anisotropic scaling: ’scale_x’ describes the scaling of a pixel
in column direction into the new x-axis, ’scale_y’ describes the linear movement between two profiles, and
’scale_z’ describes the scaling of the measured disparities into the new z-axis. The other three parameters
describe the offset of the frame of reference of the resulting x, y, z values (’offset_x’, ’offset_y’, ’offset_z’).
Default: ’none’


’camera_parameter’: the internal parameters of the camera used for the measurement. Those parameters are
required if the calibration extent has been set to ’xz’ or ’xyz’. If calibrate_sheet_of_light shall be
used for calibration, this parameter is used to set the initial camera parameters.
’calibration_object’: the calibration object used for calibration with calibrate_sheet_of_light. If
calibrate_sheet_of_light shall be used for calibration, this parameter must be set to the filename
of a calibration object created with create_sheet_of_light_calib_object.
’camera_pose’: the pose that transforms the camera coordinate system into the world coordinate system, i.e., the
pose that could be used to transform point coordinates from the world coordinate system into the camera
coordinate system. This pose is required if the calibration extent has been set to ’xz’ or ’xyz’.
Note that the world coordinate system is implicitly defined by setting the ’camera_pose’.
’lightplane_pose’: the pose that transforms the light plane coordinate system into the world coordinate system,
i.e., the pose that could be used to transform point coordinates from the world coordinate system into the
light plane coordinate system. The light plane coordinate system must be chosen such that the plane z=0
coincides with the light plane. This pose is required if the calibration extent has been set to ’xz’ or ’xyz’.
’movement_pose’: a pose representing the movement of the object between two successive profile images with
respect to the measurement system built by the camera and the laser. This pose must be expressed in the
world coordinate system. It is required if the calibration extent has been set to ’xyz’.
’scale’: with this value you can scale the 3D coordinates X, Y and Z that result when applying the calibration
transformations to the disparity image. The model parameter ’scale’ must be specified as the ratio desired
unit/original unit. The original unit is determined by the coordinates of the calibration object. If the original
unit is meters (which is the case if you use the standard calibration plate), you can set ’scale’ to the desired
unit directly by selecting ’m’, ’cm’, ’mm’, ’microns’, or ’um’. This parameter can only be set if the calibration
extent has been set to ’offset_scale’, ’xz’ or ’xyz’.
Suggested values: ’m’, ’cm’, ’mm’, ’microns’, ’um’, 1.0, 0.01, 0.001, 1.0e-6
Default value: 1.0
’scale_x’: This value defines the width of a pixel in the 3D space. This parameter can only be set if the calibration
extent has been set to ’offset_scale’.
Suggested values: 10.0, 1.0, 0.01, 0.001, 1.0e-6
Default value: 1.0
’scale_y’: This value defines the linear movement between two profiles in the 3D space. This parameter can only
be set if the calibration extent has been set to ’offset_scale’.
Suggested values: 100.0, 10.0, 1.0, 0.1, 1.0e-6
Default value: 10.0
’scale_z’: This value defines the height of a pixel in the 3D space. This parameter can only be set if the calibration
extent has been set to ’offset_scale’.
Suggested values: 10.0, 1.0, 0.01, 0.001, 1.0e-6
Default value: 1.0
’offset_x’: This value defines the x offset of reference frame for 3D results. This parameter can only be set if the
calibration extent has been set to ’offset_scale’.
Suggested values: 10.0, 0.0, 0.01, 0.001, 1.0e-6
Default value: 0.0
’offset_y’: This value defines the y offset of reference frame for xyz results. This parameter can only be set if the
calibration extent has been set to ’offset_scale’.
Suggested values: 10.0, 0.0, 0.01, 0.001, 1.0e-6
Default value: 0.0
’offset_z’: This value defines the z offset of reference frame for 3D results. This parameter can only be set if the
calibration extent has been set to ’offset_scale’.
Suggested values: 10.0, 0.0, 0.01, 0.001, 1.0e-6
Default value: 0.0
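To illustrate, a fully calibrated ('xyz') setup might be configured as in the following sketch; the camera parameters and poses are assumed to be known from the calibration of the measurement system, and the scale is chosen only as an example:

    set_sheet_of_light_param (SheetOfLightModelID, 'calibration', 'xyz')
    set_sheet_of_light_param (SheetOfLightModelID, 'camera_parameter', CameraParam)
    set_sheet_of_light_param (SheetOfLightModelID, 'camera_pose', CameraPose)
    set_sheet_of_light_param (SheetOfLightModelID, 'lightplane_pose', LightplanePose)
    set_sheet_of_light_param (SheetOfLightModelID, 'movement_pose', MovementPose)
    set_sheet_of_light_param (SheetOfLightModelID, 'scale', 'mm')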


Parameters
. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle of the sheet-of-light model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Name of the model parameter that shall be adjusted for the sheet-of-light model.
Default: ’method’
List of values: GenParamName ∈ {’method’, ’ambiguity_solving’, ’score_type’, ’num_profiles’,
’min_gray’, ’scale’, ’calibration’, ’calibration_object’, ’camera_parameter’, ’camera_pose’, ’lightplane_pose’,
’movement_pose’, ’scale_x’, ’scale_y’, ’scale_z’, ’offset_x’, ’offset_y’, ’offset_z’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Value of the model parameter that shall be adjusted for the sheet-of-light model.
Default: ’center_of_gravity’
Suggested values: GenParamValue ∈ {’default’, ’center_of_gravity’, ’last’, ’first’, ’brightest’, ’none’,
’intensity’, ’width’, ’xz’, ’xyz’, ’offset_scale’, ’m’, ’cm’, ’mm’, ’um’, ’microns’, 1.0, 1e-2, 1e-3, 1e-6}
Result
The operator set_sheet_of_light_param returns the value 2 (H_MSG_TRUE) if the given parameters are
correct. Otherwise, an exception will be raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• SheetOfLightModelID

During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Successors
get_sheet_of_light_param, measure_profile_sheet_of_light,
apply_sheet_of_light_calibration
Alternatives
create_sheet_of_light_model
See also
query_sheet_of_light_params, get_sheet_of_light_param,
get_sheet_of_light_result
Module
3D Metrology

write_sheet_of_light_model ( : : SheetOfLightModelID,
FileName : )

Write a sheet-of-light model to a file.


The operator write_sheet_of_light_model writes the sheet-of-light model SheetOfLightModelID
to the file FileName. The model can be read again with read_sheet_of_light_model. The
stored data contains all generic model parameters (see set_sheet_of_light_param) and the results of
calibrate_sheet_of_light.
The default HALCON file extension for sheet-of-light model is ’solm’.


Parameters
. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle of the sheet-of-light model.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
Name of the sheet-of-light model file.
Default: ’sheet_of_light_model.solm’
File extension: .solm
Result
The operator write_sheet_of_light_model returns the value 2 (H_MSG_TRUE) if the passed handle is
valid and if the model can be written into the named file. Otherwise, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
create_sheet_of_light_model, set_sheet_of_light_param
See also
read_sheet_of_light_model
Module
3D Metrology

5.6 Structured Light

This chapter describes the usage of structured light for 3D reconstruction.


Concept of Structured Light
The basic concept behind structured light is to use a structured illumination, i.e., an illumination showing well-
known patterns. The way those patterns appear in the scene after hitting surfaces helps to further analyze (e.g.,
perform an inspection, see Inspection / Structured Light) or reconstruct the surfaces.
For non-specular (Lambertian or diffuse) surfaces, a 3D surface can be reconstructed using a projector projecting
light like an ’inverse camera’. For every projected pattern image, a camera image of the projection on the surface
is acquired. Using the decoded correspondence between the projector coordinates and the camera coordinates they
illuminate, as well as calibration information, the 3D surface is reconstructed.
In the following, the steps that are required to use structured light are described briefly.

Create a structured light model: In the first step, a structured light model is created with

• create_structured_light_model (ModelType=’3d_reconstruction’)

or read with

• read_structured_light_model.

Set the model parameters: The different structured light model parameters can then be set with

• set_structured_light_model_param

or queried with

• get_structured_light_model_param.


The pattern parameters ’pattern_width’, ’pattern_height’, ’pattern_orientation’, and ’pattern_type’, together
with the stripe parameters ’min_stripe_width’ and ’single_stripe_width’, specify the pattern images to be
used to illuminate the surface. Finally, the ’persistence’ parameter can be enabled to debug intermediate results.

Generate the pattern images: The pattern images are to be generated with
gen_structured_light_pattern after setting all relevant parameters. Please ensure that the
output images are as needed in the particular setup.
Use the patterns to illuminate the surface and acquire the camera images: At this stage, the pattern images
are projected. The respective image of the illuminated surface is acquired by the camera for each pattern
image.
When calibrating the system, images of the illuminated calibration object need to be acquired. The calibra-
tion process is shown in detail in the example program structured_light_calibration.hdev.
The obtained calibration information can then be specified with the parameter ’camera_setup_model’ of
set_structured_light_model_param.

Decode the acquired images: The acquired CameraImages can be decoded with
decode_structured_light_pattern. Upon calling this operator, the correspondence images are
created and stored in the model StructuredLightModel.
Get the results: The decoded ’correspondence_image’, as well as other results can be queried with
get_structured_light_object. For more details of the different objects that can be queried, please
refer to the operator’s documentation.
Perform the reconstruction: The reconstructed surface can be obtained with
reconstruct_surface_structured_light.
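The workflow above might be sketched in HDevelop roughly as follows; the pattern size, the camera setup model, and the acquisition of the camera images are assumptions of this example:

    create_structured_light_model ('3d_reconstruction', StructuredLightModel)
    set_structured_light_model_param (StructuredLightModel, 'pattern_width', 1280)
    set_structured_light_model_param (StructuredLightModel, 'pattern_height', 800)
    set_structured_light_model_param (StructuredLightModel, 'camera_setup_model', CameraSetupModel)
    gen_structured_light_pattern (PatternImages, StructuredLightModel)
    * project each pattern image and acquire the corresponding camera image into CameraImages ...
    decode_structured_light_pattern (CameraImages, StructuredLightModel)
    get_structured_light_object (CorrespondenceImage, StructuredLightModel, 'correspondence_image')
    * the surface itself is then reconstructed with reconstruct_surface_structured_light
    * (see the documentation of that operator for its exact parameters)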

Further operators
The structured light model offers various other operators that help access and update the various parameters of the
model.
The operator write_structured_light_model enables writing the structured light model to a file. Please
note that previously generated pattern images are not written in this file. A structured light model file can be read
using read_structured_light_model.
Furthermore, it is possible to serialize and deserialize the structured light model using
serialize_structured_light_model and deserialize_structured_light_model.
Further Information
See also the “Solution Guide Basics” for further details. For a list of operators, please refer to Inspection
/ Structured Light.



Chapter 6

Calibration

This chapter provides information regarding camera calibration.


General Objectives
To achieve maximum accuracy of measurement for your camera setup, you have to calibrate it accordingly. There-
fore, a camera model is determined, which describes the projection of a 3D world point into a (sub-)pixel in the
image.
HALCON provides a wide range of operators to approach diverse tasks related to calibration, such as

• describing and finding a calibration object (Calibration / Calibration Object),


• the projection of points from the 3D scene onto the image plane and the other way around (Calibration /
Projection, Calibration / Inverse Projection),
• compensating perspective and radial distortions (Calibration / Rectification),
• handling the camera parameters (Calibration / Camera Parameters),
• performing a self-calibration (Calibration / Self-Calibration), and
• calibrating different setups consisting of

– one camera (Calibration / Monocular),


– multiple cameras (Calibration / Binocular, Calibration / Multi-View), or
– a camera in combination with a robot (Calibration / Hand-Eye).

This chapter gives guidance regarding the basic concept of retrieving the internal and external parameters of your
camera. The following paragraphs state how to successfully calibrate a camera. In particular, they describe

• the needed calibration object,


• the individual steps to calibrate the cameras, including

– how to prepare the calibration input data,


– how to perform the actual calibration with calibrate_cameras, and
– how to check the success of the calibration,

• the camera parameters,


• additional information about the calibration process, including

– how to obtain an appropriate calibration plate,


– how to take a set of suitable images, and
– which distortion model to use,


• the available 3D camera models and how 3D points are transformed into the image coordinate system, and
• limitations related to specific camera types.

Calibration Object
For a successful calibration of your camera setup, at least one calibration object with accurately known metric
properties is needed, e.g., a HALCON calibration plate. For the calibration, take a series of images of the calibra-
tion object in different positions and orientations. The success of the calibration highly depends on the quality of
the calibration object and the images. So you might want to exercise special diligence during the acquisition of the
calibration images. See the section “How to take a set of suitable images?” for further information.
A calibration plate is covered by multiple calibration marks, which are extracted in the calibration images in order
to retrieve their coordinates. The orientation of the plate has to be known distinctively, hence, a finder pattern is
also part of the imprint.
Your distributor can provide you with two different types of standard HALCON calibration plates:

Calibration plate with hexagonally arranged marks: As finder pattern, there are special groups of mark
hexagons where some of the marks contain dot-shaped holes (see create_caltab). One finder pat-
tern has to be visible to locate the calibration plate. To make sure the plate is not inverted, at least a second
one needs to be seen, but the plate does not have to be fully visible in the image. The origin of the coordinate
system is located at the center of the central mark of the first finder pattern. The z-axis of the coordinate
system is pointing into the calibration plate, its x-axis is pointing to the right, and its y-axis is pointing
downwards with the direction of view along the z-axis.
When using camera_calibration instead of calibrate_cameras, this calibration plate is not
applicable.

HALCON calibration plate with hexagonally arranged marks.

Calibration plate with rectangularly arranged marks: The finder pattern consists of the surrounding frame and
the triangular corner marker (see gen_caltab). Thus, the plate has to be fully visible in the image. The
origin is located in the middle of the surface of the calibration plate. The z-axis of the coordinate system
is pointing into the calibration plate, its x-axis is pointing to the right, and its y-axis is pointing downwards
with the direction of view along the z-axis.


HALCON calibration plate with rectangularly arranged marks.

When acquiring your calibration images, note that there are different recommendations on how to take them,
depending on your used calibration plate (see section “How to take a set of suitable images?”).
Preparing the Calibration Input Data
Before calling a calibration operator (e.g., calibrate_cameras), you must create and adapt the calibration
data model with the following steps:

1. Create a calibration data model with the operator create_calib_data, specifying the number of
cameras in the setup and the number of used calibration objects.

2. Specify the camera type and the initial internal camera parameters with the operator
set_calib_data_cam_param.
3. Specify the description of all calibration objects with the operator
set_calib_data_calib_object.

4. Collect observation data with the operators find_calib_object or


set_calib_data_observ_points, i.e., obtain the image coordinates of the extracted calibra-
tion marks of the calibration object and a roughly estimated pose of the calibration object relative to the
observing camera.
5. Configure the calibration process, e.g., exclude certain camera parameters from the optimization. You can
specify these parameters with the operator set_calib_data. For example, if the image sensor cell size
of camera 0 is known precisely and only the rest of the parameters needs to be calibrated, you call

set_calib_data(CalibDataID, ’camera’, 0, ’excluded_settings’,


[’sx’,’sy’]).

Performing the Actual Camera Calibration and Obtaining its Results


Using all the information stored within the calibration data model, the actual calibration can be performed calling
calibrate_cameras. Thereby, the input model is modified by optimizing the initial internal camera parame-
ters, computing and adding further data like the external camera parameters or standard deviations. Furthermore,
the standard deviations and covariances of the calibrated internal parameters and the root mean square error of the
back projection are calculated in order to check the success of the calibration.
The results can then be queried with the operator get_calib_data.
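Putting these steps together, a single-camera calibration might be sketched as follows; the initial camera parameters, the calibration plate description file, and the image names are placeholders of this example:

    create_calib_data ('calibration_object', 1, 1, CalibDataID)
    * initial internal parameters: ['area_scan_division', Focus, Kappa, Sx, Sy, Cx, Cy, ImageWidth, ImageHeight]
    StartCamParam := ['area_scan_division', 0.008, 0, 5.2e-6, 5.2e-6, 320, 240, 640, 480]
    set_calib_data_cam_param (CalibDataID, 0, [], StartCamParam)
    set_calib_data_calib_object (CalibDataID, 0, 'calplate.cpd')
    for Index := 1 to NumImages by 1
        read_image (Image, 'calib_image_' + Index)
        find_calib_object (Image, CalibDataID, 0, 0, Index, [], [])
    endfor
    calibrate_cameras (CalibDataID, Error)
    get_calib_data (CalibDataID, 'camera', 0, 'params', CamParam)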
Checking the Success of the Calibration


After a successful calibration, the root mean square error (RMSE) of the back projection of the optimization is
returned in Error (in pixels). The error gives a general indication whether the optimization was successful as it
corresponds to the average distance (in pixels) between the back projected calibration points and their extracted
image coordinates.
If only a single camera is calibrated, an Error in the order of 0.1 pixel (the typical detection error by extraction
of the coordinates of the projected calibration markers) is an indication that the optimization fits the observation
data well. If Error strongly differs from 0.1 pixels, the calibration did not perform well. Reasons for this might
be, e.g., a poor image quality, an insufficient number of calibration images, or an inaccurate calibration plate.
For information about how to check the success of the calibration using a multi-view camera setup, see the respec-
tive section in the chapter Calibration / Multi-View.
Camera Parameters
Regarding camera parameters, you can distinguish between internal and external camera parameters.

Internal camera parameters: These parameters describe the characteristics of the used camera, especially the di-
mension of the sensor itself and the projection properties of the used combination of lens, camera, and frame
grabber. Below is an overview of all available camera types and their respective parameters CameraParam.
In the list, “projective cameras” refers to the property that the lens performs a perspective projection on the
object-side of the lens, while “telecentric cameras” refers to the property that the lens performs a telecentric
projection on the object-side of the lens.

Area scan cameras have 9 to 16 internal parameters depending on the camera type.
For reasons explained below, parameters that are marked with an * asterisk are fixed and not estimated
by the algorithm.
Area scan cameras with regular lenses
Projective area scan cameras with regular lenses
• ’area_scan_division’:
[’area_scan_division’, Focus, Kappa, Sx, Sy*, Cx, Cy, ImageWidth, ImageHeight]
• ’area_scan_polynomial’:
[’area_scan_polynomial’, Focus, K1, K2, K3, P1, P2, Sx, Sy*, Cx, Cy, ImageWidth, ImageHeight]
Telecentric area scan cameras with regular lenses
• ’area_scan_telecentric_division’:
[’area_scan_telecentric_division’, Magnification, Kappa, Sx, Sy*, Cx, Cy, ImageWidth, ImageHeight]
• ’area_scan_telecentric_polynomial’:
[’area_scan_telecentric_polynomial’, Magnification, K1, K2, K3, P1, P2, Sx, Sy*, Cx, Cy,
ImageWidth, ImageHeight]
Area scan cameras with tilt lenses
Projective area scan cameras with tilt lenses
• ’area_scan_tilt_division’:
[’area_scan_tilt_division’, Focus, Kappa, ImagePlaneDist, Tilt, Rot, Sx, Sy*, Cx, Cy, ImageWidth, ImageHeight]
• ’area_scan_tilt_polynomial’:
[’area_scan_tilt_polynomial’, Focus, K1, K2, K3, P1, P2, ImagePlaneDist, Tilt, Rot, Sx, Sy*,
Cx, Cy, ImageWidth, ImageHeight]
• ’area_scan_tilt_image_side_telecentric_division’:
[’area_scan_tilt_image_side_telecentric_division’, Focus, Kappa, Tilt, Rot, Sx*, Sy*, Cx, Cy,
ImageWidth, ImageHeight]
• ’area_scan_tilt_image_side_telecentric_polynomial’:
[’area_scan_tilt_image_side_telecentric_polynomial’, Focus, K1, K2, K3, P1, P2, Tilt, Rot,
Sx*, Sy*, Cx, Cy, ImageWidth, ImageHeight]
Telecentric area scan cameras with tilt lenses
• ’area_scan_tilt_bilateral_telecentric_division’:
[’area_scan_tilt_bilateral_telecentric_division’, Magnification, Kappa, Tilt, Rot, Sx*, Sy*,
Cx, Cy, ImageWidth, ImageHeight]


• ’area_scan_tilt_bilateral_telecentric_polynomial’:
[’area_scan_tilt_bilateral_telecentric_polynomial’, Magnification, K1, K2, K3, P1, P2, Tilt,
Rot, Sx*, Sy*, Cx, Cy, ImageWidth, ImageHeight]
• ’area_scan_tilt_object_side_telecentric_division’:
[’area_scan_tilt_object_side_telecentric_division’, Magnification, Kappa, ImagePlaneDist,
Tilt, Rot, Sx, Sy*, Cx, Cy, ImageWidth, ImageHeight]
• ’area_scan_tilt_object_side_telecentric_polynomial’:
[’area_scan_tilt_object_side_telecentric_polynomial’, Magnification, K1, K2, K3, P1, P2,
ImagePlaneDist, Tilt, Rot, Sx, Sy*, Cx, Cy, ImageWidth, ImageHeight]
Area scan cameras with hypercentric lenses
Projective area scan cameras with hypercentric lenses
• ’area_scan_hypercentric_division’:
[’area_scan_hypercentric_division’, Focus, Kappa, Sx, Sy*, Cx, Cy, ImageWidth, ImageHeight]
• ’area_scan_hypercentric_polynomial’:
[’area_scan_hypercentric_polynomial’, Focus, K1, K2, K3, P1, P2, Sx, Sy*, Cx, Cy, ImageWidth, ImageHeight]
Description of the internal camera parameters of area scan cameras:

CameraType: Type of the camera, as listed above.


Focus: Focal length of the lens (only for lenses that perform a perspective projection on the object
side of the lens).
The initial value is the nominal focal length of the used lens, e.g., 0.008m.
Magnification: Magnification of the lens (only for lenses that perform a telecentric projection on
the object side of the lens).
The initial value is the nominal magnification of the used telecentric lens (the image size
divided by the object size), e.g., 0.2.
Kappa (κ): Distortion coefficient to model the radial lens distortions (only for the division
model).
Use 0.0 m⁻² as initial value.
K1, K2, K3, P1, P2: Distortion coefficients to model the radial (K1 , K2 , K3 ) and decentering
(P1 , P2 ) lens distortions (only for the polynomial model).
Use 0.0 as initial value for all five coefficients.
ImagePlaneDist: Distance of the exit pupil of the lens to the image plane. The exit pupil is the
(virtual) image of the aperture stop (typically the diaphragm), as viewed from the image side
of the lens. Typical values are in the order of a few centimeters to very large values if the lens
is close to being image-side telecentric.
Tilt, Rot: The tilt angle tilt (0◦ ≤ tilt < 90◦ ) describes the angle by which the optical axis
is tilted with respect to the normal of the sensor plane (corresponds to a rotation around the
x-axis). The rotation angle rot (0◦ ≤ rot < 360◦ ) describes the rotation around the optical
axis (z-axis). For a rotation rot = 0◦ the optical axis gets tilted vertically down with respect
to the camera housing, rot = 90◦ corresponds to the optical axis being tilted horizontally to
the left (direction of view along the z-axis), rot = 180◦ corresponds to the optical axis being
tilted vertically up, and rot = 270◦ corresponds to the optical axis being tilted horizontally to
the right by tilt.
These parameters are only used if a tilt lens is part of the camera setup.


[Figure: tilt lens geometry showing the rotation rot (applied first), the tilt angle tilt, and the distance ImagePlaneDist between the exit pupil and the image plane]

The tilt of the lens is described by the parameters rot , tilt and ImagePlaneDist. rot
describes the orientation of the tilt axis in relation to the x-axis of the sensor and has to be
applied first. tilt describes the actual tilt of the lens. ImagePlaneDist is the distance of
the exit pupil of the lens to the image plane.
These angles are typically roughly known based on the considerations that led to the use of
the tilt lens or can be read off from the mechanism by which the lens is tilted.
Sx, Sy: Scale factors. They correspond to the horizontal and vertical distance between two neigh-
boring cells on the sensor. Since in most cases the image signal is sampled line-synchronously,
Sy is determined by the dimension of the sensor and does not need to be estimated by the cal-
ibration process.
The initial values depend on the dimensions of the used chip of the camera. See the technical
specification of your camera for the actual values. Attention: These values increase if the
image is subsampled!
As projective cameras are described through the pinhole camera model, it is impossible to
determine Focus, Sx , and Sy simultaneously. Therefore, the algorithm will keep Sy fixed.
For telecentric lenses, it is impossible to determine Magnification, Sx , and Sy simulta-
neously. Therefore, the algorithm will keep Sy fixed.
For image-side telecentric tilt lenses (see chapter “Basics”, section “Camera Model and Pa-
rameters” in the “Solution Guide III-C 3D Vision” for an overview of different
types of tilt lenses), it is impossible to determine Focus, Sx , Sy , and the tilt parameters tilt
and rot simultaneously. Therefore, additionally to Sy , the algorithm will keep Sx fixed.
For bilateral telecentric tilt lenses, it is impossible to determine Magnification, Sx ,
Sy , and the tilt parameters tilt and rot simultaneously. Therefore, additionally to Sy , the
algorithm will keep Sx fixed.
Cx, Cy: Column (Cx ) and row (Cy ) coordinate of the principal point of the image (center of the
radial distortion).
Use the half image width and height as initial values. Attention: These values decrease if the
image is subsampled!
ImageWidth, ImageHeight: Width and height of the sampled image. Attention: These values
decrease if the image is subsampled!
Line scan cameras have 12 or 16 internal parameters depending on the camera type.
For reasons explained below, parameters that are marked with an * asterisk are fixed and not estimated
by the algorithm.
Line scan cameras with regular lenses
Projective line scan cameras with regular lenses
• ’line_scan_division’:
[’line_scan_division’, Focus, Kappa, Sx*, Sy*, Cx, Cy, ImageWidth, ImageHeight, Vx, Vy, Vz]
• ’line_scan_polynomial’:
[’line_scan_polynomial’, Focus, K1, K2, K3, P1, P2, Sx*, Sy*, Cx, Cy, ImageWidth, ImageHeight, Vx, Vy, Vz]


Telecentric line scan cameras with regular lenses


• ’line_scan_telecentric_division’:
[’line_scan_telecentric_division’, Magnification, Kappa, Sx*, Sy*, Cx, Cy, ImageWidth, ImageHeight, Vx, Vy, Vz*]
• ’line_scan_telecentric_polynomial’:
[’line_scan_telecentric_polynomial’, Magnification, K1, K2, K3, P1, P2, Sx*, Sy*, Cx, Cy, ImageWidth, ImageHeight, Vx, Vy, Vz*]
Description of the internal camera parameters of line scan cameras:

CameraType: Type of the camera, as listed above.


Focus: Focal length of the lens (only for lenses that perform a perspective projection on the object
side of the lens).
The initial value is the nominal focal length of the used lens, e.g., 0.008m.
Magnification: Magnification of the lens (only for lenses that perform a telecentric projection on
the object side of the lens).
The initial value is the nominal magnification of the used telecentric lens (the image size
divided by the object size), e.g., 0.2.
Kappa (κ): Distortion coefficient of the division model to model the radial lens distortions.
Use 0.0 m⁻² as initial value.
K1, K2, K3, P1, P2: Distortion coefficients to model the radial (K1 , K2 , K3 ) and decentering
(P1 , P2 ) lens distortions (only for the polynomial model).
Use 0.0 as initial value for all five coefficients.
Sx: Scale factor. Corresponds to the horizontal distance between two neighboring cells on the
sensor. Note that Focus or Magnification, respectively, and Sx cannot be determined
simultaneously. Therefore, Sx is kept fixed in the calibration. The initial value for Sx can be
taken from the technical specifications of the camera. Attention: This value increases if the
image is subsampled!
Sy: Scale factor. During the calibration, it appears only in the form pv = −Sy · Cy . Consequently,
Sy and Cy cannot be determined simultaneously. Therefore, in the calibration, Sy is kept
fixed. pv describes the distance of the image center point from the sensor line in meters. The
initial value for Sy can be taken from the technical specifications of the camera. Attention:
This value increases if the image is subsampled!
Cx: Column coordinate of the image center point (center of the radial distortions). Use half of
the image width as the initial value for Cx . Attention: This value decreases if the image is
subsampled!
Cy: Distance of the image center point (center of the radial distortions) from the sensor line in
scanlines. The initial value can normally be set to 0.
ImageWidth, ImageHeight: Width and height of the sampled image. Attention: These values
decrease if the image is subsampled!
Vx, Vy, Vz: X-, Y-, and Z-component of the motion vector.
The initial values for the x-, y-, and z-component of the motion vector depend on the image
acquisition setup. Assuming a camera that looks perpendicularly onto a conveyor belt and
that is rotated around its optical axis such that the sensor line is perpendicular to the conveyor
belt, i.e., the y-axis of the camera coordinate system is parallel to the conveyor belt, use the
initial values Vx = Vz = 0. The initial value for Vy can then be determined, e.g., from a line
scan image of an object with known size (e.g., calibration plate, ruler):

Vy = l[m] / l[row]

with

l[m] = length of the object in object coordinates [meter]
l[row] = length of the object in image coordinates [rows]
If, compared to the above setup, the camera is rotated 30 degrees around its optical axis, i.e.,
around the z-axis of the camera coordinate system, the above determined initial values must
be changed as follows:


Vx' = sin(30°) · Vy
Vy' = cos(30°) · Vy
Vz' = Vz = 0

If, compared to the first setup, the camera is rotated -20 degrees around the x-axis of the
camera coordinate system, the following initial values result:

Vx' = Vx = 0
Vy' = cos(-20°) · Vy
Vz' = sin(-20°) · Vy

The quality of the initial values for Vx , Vy , and Vz are crucial for the success of the whole
calibration. If they are not precise enough, the calibration may fail.
Note that for telecentric line scan cameras, the value of Vz has no influence on the image
position of 3D points and therefore cannot be determined. Consequently, Vz is not optimized
and left at its initial value for telecentric line scan cameras. Therefore, the initial value of Vz
should be set to 0. For setups with multiple telecentric line scan cameras that share a common
motion vector (for a detailed explanation, see Calibration / Multi-View), however, Vz can be
determined based on the camera poses. Therefore, in this case Vz is optimized.
Restrictions for internal camera parameters Note that the term focal length is not quite correct and
would be appropriate only for an infinite object distance. To simplify matters, the term focal length is
always used even if the image distance is meant.
For all operators that use camera parameters as input the respective parameter values are checked as to
whether they fulfill the following restrictions:

Sx > 0
Sy ≥ 0
Focus > 0
Magnification > 0
ImageWidth > 0
ImageHeight > 0
ImagePlaneDist > 0
0 ≤ tilt < 90°
0 ≤ rot < 360°
Vx² + Vy² + Vz² ≠ 0

For some operators the restrictions differ slightly. In particular, for operators that do not support line
scan cameras the following restriction applies:

Sy > 0

External camera parameters: The following 6 parameters describe the 3D pose, i.e., the position and orientation
of the world coordinate system relative to the camera coordinate system. The x- and y-axis of the camera
coordinate system are parallel to the column and row axes of the image, while the z-axis is perpendicular
to the image plane. For line scan cameras, the pose of the world coordinate system refers to the camera
coordinate system of the first image line.

TransX: Translation along the x-axis of the camera coordinate system.
TransY: Translation along the y-axis of the camera coordinate system.
TransZ: Translation along the z-axis of the camera coordinate system.
RotX: Rotation around the x-axis of the camera coordinate system.
RotY: Rotation around the y-axis of the camera coordinate system.
RotZ: Rotation around the z-axis of the camera coordinate system.

The pose tuple contains one more element, which is the representation type of the pose. It codes the com-
bination of the parameters OrderOfTransform, OrderOfRotation, and ViewOfTransform. See
create_pose for more information about 3D poses.
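As an illustration, the following HDevelop sketch builds such a pose tuple with create_pose; the translation and rotation values are hypothetical and only serve to show the parameter order.

* Minimal sketch (hypothetical values): a pose with a translation of
* 0.1 m / 0.2 m / 0.3 m and rotations of 10 / 20 / 30 degrees. The last
* three parameters select the representation type stored in Pose[6].
create_pose (0.1, 0.2, 0.3, 10, 20, 30, 'Rp+T', 'gba', 'point', Pose)
* Pose is a tuple with 7 elements:
* [TransX, TransY, TransZ, RotX, RotY, RotZ, representation type].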
When using a standard HALCON calibration plate, the world coordinate system is defined by the coordinate
system of the calibration plate. See the section “Calibration Object” above for further information.
If a HALCON calibration plate is used, you can use the operator find_calib_object to determine
initial values for all parameters. Using HALCON calibration plates with rectangularly arranged marks,
a combination of the two operators find_caltab and find_marks_and_pose will have the same
effect.
Parameter units: HALCON calibration plates use meters as their unit, and the camera parameters use corresponding units. Calibration can, of course, be done using different units, but in this case the related parameters have to be adapted accordingly. Here, we list the HALCON default units for the different camera parameters:

Parameter                                  Unit
External   RotX, RotY, RotZ                deg, deg, deg
           TransX, TransY, TransZ          m, m, m
Internal   Cx, Cy                          px, px
           Focus                           m
           ImagePlaneDist                  m
           ImageWidth, ImageHeight         px, px
           K1, K2, K3                      m^-2, m^-4, m^-6
           Kappa (κ)                       m^-2
           P1, P2                          m^-1, m^-1
           Magnification                   - (scalar)
           Sx, Sy                          m/px, m/px
           Tilt, Rot                       deg, deg
           Vx, Vy, Vz                      m/scanline, m/scanline, m/scanline
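To make the units concrete, the following HDevelop sketch assembles hypothetical internal parameters for an area scan camera with the division distortion model; the numeric values are made up for illustration, and the tuple order corresponds to the 'area_scan_division' parameter tuple used in the examples later in this chapter.

* Minimal sketch (hypothetical values), in HALCON default units:
* [type, Focus (m), Kappa (1/m^2), Sx (m/px), Sy (m/px),
*  Cx (px), Cy (px), ImageWidth (px), ImageHeight (px)]
CamParam := ['area_scan_division', 0.016, -1500.0, 7.4e-6, 7.4e-6, \
             320.0, 240.0, 640, 480]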

Additional Information about the Calibration Process


The use of calibrate_cameras leads to some questions, which are addressed in the following sections:

How to obtain an appropriate calibration plate? You can obtain high-precision calibration plates in various
sizes and materials from your local distributor. These calibration plates come with associated description
files, and their calibration marks can easily be extracted with find_calib_object.
It is also possible to use any arbitrary object for calibration. The only requirement is that the object has
characteristic points that can be robustly detected in the image and that the 3D world position of these points
is known with high accuracy. See the “Solution Guide III-C 3D Vision” for details.
Self-printed calibration objects are usually not accurate enough for high-precision applications.
How to take a set of suitable images? With the combination of lens (fixed focus setting!), camera, and frame
grabber to be calibrated, a set of images of the calibration plate must be taken (see open_framegrabber
and grab_image).
Your local distributor can provide you with two different types of standard HALCON calibration plates:
Calibration plates with hexagonally arranged marks (see create_caltab) and calibration plates with
rectangularly arranged marks (see gen_caltab). Since these two calibration plates substantially differ
from each other, in some cases additional particularities apply (see below).
The parameters and hints listed below should be considered when taking the calibration images. For a
successful calibration, the setup and the used set of images should have certain qualities. These qualities
may vary for the specific task and demand. In order to give guidance, values and hints suitable for a basic
monocular camera setup are mentioned.


Regarding the camera setup:

• Aperture
The aperture of the camera must not be changed during the acquisition of the images. If the
aperture is changed after the calibration, the camera must be calibrated anew.
• Camera pose
The position of the camera must not be changed during the image acquisition.
• Focus
The calibration images should be sharply focused, i.e., transitions between objects should be
clearly delimited. Neither the focus nor the focal length may be changed during the image
acquisition.

Regarding the placement of the calibration plates:

• Field of view coverage and orientation


Within the set of calibration images, every part of the field of view should be covered by the plate
at least once. The calibration plate may also fill the entire image. The orientation of the plate
should vary within the set of images.
• Tilt angles
The set of calibration images should also contain images with tilted calibration plates. The plate
should be tilted in different directions at an angle of about 30-45°. Note that if the recommended
angle cannot be realized due to, e.g., limited depth of field, you should at least tilt the plate as
steeply as your setup allows.
• Number of images / calibration plate poses
– Plate with hexagonally arranged marks: At least 6 images (at least 4 of which should show a
tilted calibration plate).
– Plate with rectangularly arranged marks: At least 15 images.
Nevertheless, you must make sure that the poses of the calibration plates also fulfill the other
requirements.
• Inverted acquisition of the calibration plate
The calibration marks must not be acquired inverted. This can, e.g., happen if a calibration plate
made of glass is acquired from its backside or if a line scan camera is not moving downwards with
respect to the image coordinate system (i.e., Vy is negative).

Regarding image properties and content:

• Pattern coverage
How much of the calibration pattern must at least be contained in the images depends on the used
plate.
– Plate with hexagonally arranged marks: At least one finder pattern needs to be visible. If at
least two finder patterns are visible in the image, it is possible to detect whether the calibration
plate is mirrored or not. In a mirrored case, a suitable error will be returned.
– Plate with rectangularly arranged marks: The plate needs to be completely visible, as the finder
pattern is the frame surrounding the point marks.
Nevertheless, of course, the more of the calibration pattern is visible to the camera and the more
of the field of view is filled by the calibration plate, the better.
• Mark diameter
The marks of the calibration plates should have a diameter of at least 20 pixels in each image. This
requirement is essential for a successful calibration.
• Contrast
The contrast between the light and dark areas of the calibration plate should be at least 100 gray
values (regarding byte images).


• Overexposure
To avoid overexposed images, make sure that gray values of the light parts of the calibration plate
do not exceed 240 (regarding byte images), especially not in the neighborhood of the calibration
marks.
• Homogeneity
The calibration plate should be illuminated homogeneously and reflections should be avoided.
As a rule of thumb, the range of gray values of the light parts of the plate should not exceed 45
(regarding byte images).

Regarding image format and preprocessing:

• Image format
Calibration images should be saved in an uncompressed format. Compression artifacts which
occur, e.g., when using JPG format and high compression rates need to be avoided.
• Preprocessing
Calibration images should not be preprocessed. If image properties like contrast or focus are
insufficient (see above), the issues need to be resolved by adjusting the camera setup instead of
processing the images ahead of the calibration.

Which distortion model should be used? Two distortion models can be used: The division model and the poly-
nomial model. The division model uses one parameter to model the radial distortions while the polynomial
model uses five parameters to model radial and decentering distortions (see the sections “Camera parame-
ters” and “The Used 3D camera model”).
The advantages of the division model are that the distortions can be applied faster, especially the inverse
distortions, i.e., if world coordinates are projected into the image plane. Furthermore, if only a few calibration
images are used or if the field of view is not covered sufficiently, the division model typically yields more
stable results than the polynomial model. The main advantage of the polynomial model is that it can model
the distortions more accurately because it uses higher order terms to model the radial distortions and because
it also models the decentering distortions. Note that the polynomial model cannot be inverted analytically.
Therefore, the inverse distortions must be calculated iteratively, which is slower than the calculation of the
inverse distortions with the (analytically invertible) division model.
Typically, the division model should be used for the calibration. If the accuracy of the calibration is not
high enough, the polynomial model can be used. Note, however, that the calibration sequence used for
the polynomial model must provide an even better coverage of the area in which measurements will later
be performed. The distortions may be modeled inaccurately outside of the area that was covered by the
calibration plate. This holds for the image border as well as for areas inside the field of view that were not
covered by the calibration plate.

The Used 3D Camera Model


In general, camera calibration means the exact determination of the parameters that model the (optical) projection
of any 3D world point pw into a (sub-)pixel (r, c) in the image. This is important if the original 3D pose of an
object must be computed from the image (e.g., for measuring industrial parts). The appropriate projection model
depends on the camera type used in your setup.
For the modeling of this projection process, which is determined by the used combination of camera, lens, and
frame grabber, HALCON provides the following 3D camera models:

Area scan pinhole camera: The combination of an area scan camera with a lens that effects a perspective projec-
tion on the object side of the lens and that may show radial and decentering distortions. The lens may be a
tilt lens, i.e., the optical axis of the lens may be tilted with respect to the camera’s sensor (this is sometimes
called a Scheimpflug lens). Since hypercentric lenses also perform a perspective projection, cameras with
hypercentric lenses are pinhole cameras. The models for regular (i.e., non-tilt) pinhole and image-side tele-
centric lenses are identical. In contrast, the models for pinhole and image-side telecentric tilt lenses differ
substantially, as described below.


Area scan telecentric camera: The combination of an area scan camera with a lens that is telecentric on the
object-side of the lens, i.e., that effects a parallel projection on the object-side of the lens, and that may
show radial and decentering distortions. The lens may be a tilt lens. The models for regular (i.e., non-tilt)
bilateral and object-side telecentric lenses are identical. In contrast, the models for bilateral and object-side
telecentric tilt lenses differ substantially, as described below.
Line scan pinhole camera: The combination of a line scan camera with a lens that effects a perspective projection
and that may show radial distortions. Tilt lenses are currently not supported for line scan cameras.
Line scan telecentric camera: The combination of a line scan camera with a lens that effects a telecentric pro-
jection and that may show radial distortions. Tilt lenses are currently not supported for line scan cameras.

To transform a 3D point p^w = (x^w, y^w, z^w)^T, which is given in world coordinates, into a 2D point q^i = (r, c)^T, which is given in pixel coordinates, a chain of transformations is needed:

$$p^w \rightarrow p^c \rightarrow q^c \rightarrow \tilde q^{\,c} \rightarrow q^t \rightarrow q^i$$

p^w: 3D world point
p^c: Transformed into the camera coordinate system
q^c: Projected into the image plane (2D point, still in metric coordinates)
q̃^c: Lens distortion applied
q^t: If a tilt lens is used, the point q̃^c is projected onto the point q^t in the tilted image plane. In this
case the distorted point q̃^c only lies on a virtual image plane of a system without tilt.
q^i: Pixel coordinates

The following paragraphs describe these steps in more detail for area scan cameras and subsequently for line scan
cameras. For an even more detailed description of the different 3D camera models as well as some explanatory
diagrams, please refer to the chapter “Basics”, section “Camera Model and Parameters” in the “Solution Guide
III-C 3D Vision”.

Transformation step 1: p^w → p^c. The point p^w is transformed from world into camera coordinates (points as
homogeneous vectors, compare affine_trans_point_3d) by:

$$\begin{pmatrix} p^c \\ 1 \end{pmatrix} = \begin{pmatrix} x^c \\ y^c \\ z^c \\ 1 \end{pmatrix} = \begin{pmatrix} R & T \\ 0\;0\;0 & 1 \end{pmatrix} \cdot \begin{pmatrix} p^w \\ 1 \end{pmatrix}$$

with R and T being the rotation and translation matrices (refer to the chapter “Basics”, section “3D Transformations and Poses” in the “Solution Guide III-C 3D Vision” for detailed information).
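A minimal HDevelop sketch of this step, assuming a pose tuple WorldToCamPose (the pose of the world coordinate system in camera coordinates, e.g., obtained from a calibration) and a hypothetical world point (0.01, 0.02, 0):

* Convert the pose into a homogeneous transformation matrix and apply it
* to a world point to obtain camera coordinates (all values in meters).
pose_to_hom_mat3d (WorldToCamPose, HomMat3D)
affine_trans_point_3d (HomMat3D, 0.01, 0.02, 0.0, Xc, Yc, Zc)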
Transformation step 2: p^c → q^c. If the underlying camera model is an area scan pinhole camera, the projection
of p^c = (x^c, y^c, z^c)^T into the image plane is described by the following equation:

$$q^c = \begin{pmatrix} u \\ v \end{pmatrix} = \frac{f}{z^c} \begin{pmatrix} x^c \\ y^c \end{pmatrix}$$

where f = Focus. For cameras with hypercentric lenses, the following equation holds instead:

$$q^c = \begin{pmatrix} u \\ v \end{pmatrix} = \frac{-f}{z^c} \begin{pmatrix} x^c \\ y^c \end{pmatrix}$$

If an area scan telecentric camera is used, the corresponding equation is:

$$q^c = \begin{pmatrix} u \\ v \end{pmatrix} = m \begin{pmatrix} x^c \\ y^c \end{pmatrix}$$

where m = Magnification.


Transformation step 3: q^c → q̃^c. For all types of cameras, the lens distortions can be modeled either by the
division model or by the polynomial model.
The division model uses one parameter Kappa to model the radial distortions.
The following equations transform the distorted image plane coordinates into undistorted image plane coordinates if the division model is used:

$$\begin{pmatrix} u \\ v \end{pmatrix} = \frac{1}{1 + \kappa(\tilde u^2 + \tilde v^2)} \begin{pmatrix} \tilde u \\ \tilde v \end{pmatrix}$$

These equations can be inverted analytically, which leads to the following equations that transform undistorted coordinates into distorted coordinates:

$$\tilde q^{\,c} = \begin{pmatrix} \tilde u \\ \tilde v \end{pmatrix} = \frac{2}{1 + \sqrt{1 - 4\kappa(u^2 + v^2)}} \begin{pmatrix} u \\ v \end{pmatrix}$$

The polynomial model uses three parameters (K1, K2, K3) to model the radial distortions and two parameters (P1, P2) to model the decentering distortions.
The following equations transform the distorted image plane coordinates into undistorted image plane coordinates if the polynomial model is used:

$$\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} \tilde u + \tilde u (K_1 r^2 + K_2 r^4 + K_3 r^6) + P_1 (r^2 + 2\tilde u^2) + 2 P_2 \tilde u \tilde v \\ \tilde v + \tilde v (K_1 r^2 + K_2 r^4 + K_3 r^6) + 2 P_1 \tilde u \tilde v + P_2 (r^2 + 2\tilde v^2) \end{pmatrix} \quad \text{with } r = \sqrt{\tilde u^2 + \tilde v^2}$$

These equations cannot be inverted analytically. Therefore, distorted image plane coordinates must be calculated from undistorted image plane coordinates numerically.
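The following HDevelop sketch applies the division model in both directions for a single point; the image plane coordinates U, V (in meters) and the Kappa value are hypothetical and serve only as an illustration.

* Minimal sketch (hypothetical values): division model for one point.
U := 0.001
V := 0.0005
Kappa := -1200.0
* Undistorted -> distorted (analytic inversion of the division model).
Scale := 2.0 / (1.0 + sqrt(1.0 - 4.0 * Kappa * (U * U + V * V)))
UTilde := Scale * U
VTilde := Scale * V
* Distorted -> undistorted (should reproduce U and V).
UBack := UTilde / (1.0 + Kappa * (UTilde * UTilde + VTilde * VTilde))
VBack := VTilde / (1.0 + Kappa * (UTilde * UTilde + VTilde * VTilde))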
Additional transformation step for tilt lenses: q̃c → qt If the camera lens is a tilt lens, the tilt of the lens with
respect to the image plane is described by the rotation angle rot and the tilt angle tilt.
In this step you have to further distinguish between different types of tilt lenses as described below.
See chapter “Basics”, section “Camera Model and Parameters” in the “Solution Guide III-C 3D
Vision” for an overview of different types of tilt lenses.
For projective tilt lenses and object-side telecentric tilt lenses (which perform a perspective projection on the image side of the lens), the projection of q̃^c = (ũ, ṽ)^T into the point q^t = (û, v̂)^T, which lies in the tilted image plane, is described by a projective 2D transformation, i.e., by the homogeneous 3 × 3 matrix H (see projective_trans_point_2d):

$$\begin{pmatrix} q^t \\ q^t_w \end{pmatrix} = H \cdot \begin{pmatrix} \tilde q^{\,c} \\ 1 \end{pmatrix}$$

where q^t_w is the additional coordinate from the projective transformation of a homogeneous point.

$$H = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix} = \begin{pmatrix} q_{11} q_{33} - q_{13} q_{31} & q_{21} q_{33} - q_{23} q_{31} & 0 \\ q_{12} q_{33} - q_{13} q_{32} & q_{22} q_{33} - q_{23} q_{32} & 0 \\ q_{13}/d & q_{23}/d & q_{33} \end{pmatrix}$$

where d = ImagePlaneDist and

$$Q = \begin{pmatrix} q_{11} & q_{12} & q_{13} \\ q_{21} & q_{22} & q_{23} \\ q_{31} & q_{32} & q_{33} \end{pmatrix} = \begin{pmatrix} \cos^2\!\rho\,(1 - \cos\tau) + \cos\tau & \cos\rho \sin\rho\,(1 - \cos\tau) & \sin\rho \sin\tau \\ \cos\rho \sin\rho\,(1 - \cos\tau) & \sin^2\!\rho\,(1 - \cos\tau) + \cos\tau & -\cos\rho \sin\tau \\ -\sin\rho \sin\tau & \cos\rho \sin\tau & \cos\tau \end{pmatrix}$$

with ρ = rot and τ = tilt.


For image-side telecentric tilt lenses and bilateral telecentric tilt lenses (which perform a parallel projec-
tion on the image side of the lens), the projection onto the tilted image plane is described by a linear 2D
transformation, i.e., by a 2 × 2 matrix:
   
$$H = \begin{pmatrix} h_{11} & h_{12} \\ h_{21} & h_{22} \end{pmatrix} = \frac{1}{q_{11} q_{22} - q_{12} q_{21}} \begin{pmatrix} q_{22} & -q_{12} \\ -q_{21} & q_{11} \end{pmatrix}$$

where Q is defined as above for projective lenses.


Transformation step 4: q^t → q^i / q̃^c → q^i. Finally, the point q̃^c = (ũ, ṽ)^T (or q^t if a tilt lens is present) is
transformed from the image plane coordinate system into the image coordinate system (the pixel coordinate
system):

$$q^i = \begin{pmatrix} r \\ c \end{pmatrix} = \begin{pmatrix} \dfrac{\tilde v}{S_y} + C_y \\[2mm] \dfrac{\tilde u}{S_x} + C_x \end{pmatrix}$$
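A short HDevelop illustration of this last step, continuing the hypothetical distorted coordinates UTilde, VTilde from the sketch above and using made-up values for Sx, Sy, Cx, Cy:

* Convert image plane coordinates (meters) into pixel coordinates.
Sx := 7.4e-6
Sy := 7.4e-6
Cx := 320.0
Cy := 240.0
Row := VTilde / Sy + Cy
Col := UTilde / Sx + Cx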

For line scan cameras, also the relative motion between the camera and the object must be modeled. In HALCON,
the following assumptions for this motion are made:

1. The camera moves with constant velocity along a straight line.

2. The orientation of the camera is constant.


3. The motion is equal for all images.

The motion is described by the motion vector V = (Vx , Vy , Vz )T that must be given in [meter/row] in the camera
coordinate system. The motion vector describes the motion of the camera, assuming a fixed object. In fact, this is
equivalent to the assumption of a fixed camera with the object traveling along −V .
The camera coordinate system of line scan cameras is defined as follows: The origin of the coordinate system
is the center of projection (for pinhole cameras) or the center of distortion (for telecentric cameras), respectively.
The z-axis is identical to the optical axis and directed so that the visible points have positive z coordinates. The
y-axis is perpendicular to the sensor line and to the z-axis. It is directed so that the motion vector has a positive
y-component. The x-axis is perpendicular to the y- and z-axis, so that the x-, y-, and z-axis form a right-handed
coordinate system.
As the camera moves over the object during the image acquisition, also the camera coordinate system moves
relatively to the object, i.e., each image line has been imaged from a different position. This means there would
be an individual pose for each image line. To make things easier, in HALCON all transformations from world
coordinates into camera coordinates and vice versa are based on the pose of the first image line only. The motion
V is taken into account during the projection of the point pc into the image. Consequently, only the pose of the
first image line is computed by the operator find_calib_object (and stored by calibrate_cameras in
the calibration results).
For line scan cameras, the transformation from world to camera coordinates (pw → pc ) works in the same way.
Therefore, you can also apply transformation step 1 as described for area scan cameras above.
For line scan pinhole cameras, the projection of the point pc that is given in the camera coordinate system into
(sub-)pixel coordinates (r, c) in the image is modeled as follows:
Assuming

$$p^c = \begin{pmatrix} x \\ y \\ z \end{pmatrix},$$

the following set of equations must be solved for m, ũ, and t:

$$\begin{aligned} m \cdot u(\tilde u, p_v) &= x - t \cdot V_x \\ m \cdot v(\tilde u, p_v) &= y - t \cdot V_y \\ m \cdot \mathrm{Focus} &= z - t \cdot V_z \end{aligned}$$

where u(ũ, ṽ) and v(ũ, ṽ) are the undistortion functions that are described above for area scan cameras and pv = −Sy · Cy.
For line scan telecentric cameras, the following set of equations must be solved for ũ and t:

$$\begin{aligned} u(\tilde u, p_v) / \mathrm{Magnification} &= x - t \cdot V_x \\ v(\tilde u, p_v) / \mathrm{Magnification} &= y - t \cdot V_y \end{aligned}$$

with u(ũ, ṽ), v(ũ, ṽ) and pv as defined above. Note that neither z nor Vz influences the projection for telecentric
cameras.
The above formulas already include the compensation for image distortions.
Finally, the point is transformed into the image coordinate system, i.e., the pixel coordinate system:

$$q^i = \begin{pmatrix} r \\ c \end{pmatrix} = \begin{pmatrix} t \\ \dfrac{\tilde u}{S_x} + C_x \end{pmatrix}.$$

Further Limitations Related to Specific Camera Types


For pinhole cameras, if the calibration plates are parallel to each other in all images (in particular, if they all lie in
the same plane), it is impossible to determine Focus together with all six of the external camera parameters. For
example, it is impossible to determine Focus and the distance of the calibration plates to the camera in this case.
To be able to calibrate all camera parameters uniquely, make sure that you acquire images of the calibration plate
tilted in different orientations.
For telecentric lenses, the distance of the calibration plate from the camera cannot be determined. Therefore,
the z-component of the resulting calibration plate pose is set to 1 m in the calibration results. Furthermore, as
described previously, for telecentric line scan cameras, Vz cannot be determined and is left at its initial value,
except for multi-camera setups that have a common motion vector, in which case Vz can be determined.
For tilt lenses, the greater the lens distortion is, the more accurately the tilt can be determined. For lenses with small
distortions, the tilt cannot be determined robustly. Therefore, the optimized tilt parameters may differ significantly
from the nominal tilt parameters of the setup. If this is the case, please check Error. If Error is small, the
resulting camera parameters describe the imaging geometry consistently within the calibrated volume and can be
used for accurate measurements.
For perspective tilt lenses and object-side telecentric tilt lenses, the image plane distance can only be deter-
mined uniquely if the tilt is not 0 degrees. The smaller the tilt, the less accurately the image plane distance can
be determined. Therefore, the optimized image plane distance may differ significantly from the nominal image
plane distance of the setup. If this is the case, please check Error. If Error is small, the resulting camera
parameters describe the imaging geometry consistently within the calibrated volume and can be used for accurate
measurements.
For perspective tilt lenses and object-side telecentric tilt lenses that are tilted around the horizontal or vertical
axis, i.e., for which the rotation angle is 0, 90, 180, or 270 degrees, the tilt angle tilt, scale factor Sx , the focal
length f (for perspective tilt lenses) or the magnification m (for object-side telecentric tilt lenses), and the distance
of the tilted image plane from the perspective projection center d cannot be determined uniquely. In this case, Sx
should be excluded from the optimization by calling
set_calib_data(CalibDataID, ’camera’, ’general’, ’excluded_settings’, ’sx’).
Additionally, note that for tilt lenses it is only possible to determine tilt and rot simultaneously. This is an
implementation choice that makes the optimization numerically more robust. Consequently, the parameters tilt
and rot are excluded simultaneously from the optimization by calling


set_calib_data(CalibDataID, ’camera’, ’general’, ’excluded_settings’, ’tilt’).
Pinhole cameras with tilt lenses of large focal length have nearly telecentric projection characteristics. Therefore,
as described before, Sx and the tilt parameters tilt and rot are correlated and cannot be determined precisely at the
same time. In this case, it is again advisable to exclude Sx from the optimization.
For telecentric lenses, there are always two possible poses of a calibration plate for a single image. Therefore, it
is not possible to decide which one of the two poses is actually present in the image. This ambiguity also affects
the tilt parameters tilt and rot of a telecentric tilt lens. Consequently, depending on the initial parameters for tilt
and rot, the camera calibration may return the alternative parameters instead of the nominal ones. If this is the
case, please check Error. If Error is small, the resulting camera parameters describe the imaging geometry
consistently within the calibrated volume and can be used for accurate measurements.
For line scan cameras with the polynomial distortion model (for cameras with perspective as well as telecentric
lenses), the parameters P1 and P2 are highly correlated with other parameters in the camera model. Therefore,
they typically cannot be determined reliably and should be excluded from the calibration by calling
set_calib_data(CalibDataID, ’camera’, ’general’, ’excluded_settings’,
’poly_tan_2’).
Further Information
Learn about camera calibration and many other topics in interactive online courses at our MVTec Academy.

6.1 Binocular

binocular_calibration ( : : NX, NY, NZ, NRow1, NCol1, NRow2, NCol2,
    StartCamParam1, StartCamParam2, NStartPose1, NStartPose2,
    EstimateParams : CamParam1, CamParam2, NFinalPose1, NFinalPose2,
    RelPose, Errors )

Determine all camera parameters of a binocular stereo system.


In general, binocular calibration means the exact determination of the parameters that model the 3D reconstruction
of a 3D point from the corresponding images of this point in a binocular stereo system. This reconstruction
is specified by the internal parameters CamParam1 of camera 1 and CamParam2 of camera 2 describing the
underlying camera model, and the external parameters RelPose describing the relative pose of camera system 2
in relation to camera system 1.
Thus, known 3D model points (with coordinates NX, NY, NZ) are projected in the image planes of both cameras
(camera 1 and camera 2) and the sum of the squared distances between these projections and the corresponding
measured image points (with coordinates NRow1, NCol1 for camera 1 and NRow2, NCol2 for camera 2) is
minimized. It should be noted that all these model points must be visible in both images. The used camera model
is described in Calibration. The camera model is represented (for each camera separately) by a tuple of 9 to 16
parameters that correspond to perspective or telecentric area scan or telecentric line scan cameras (see Calibration).
The projection uses the initial values StartCamParam1 and StartCamParam2 of the internal parameters of
camera 1 and camera 2, which can be obtained from the camera data sheets. In addition, the initial guesses
NStartPose1 and NStartPose2 of the poses of the 3D calibration model relative to the camera coordinate
systems (ccs) of camera 1 and camera 2 are needed as well. These poses are expected in the form ccs Pwcs , where
wcs denotes the world coordinate system (see Transformations / Poses and “Solution Guide III-C - 3D
Vision”). They can be determined by the operator find_marks_and_pose. Since this calibration algorithm
simultaneously handles correspondences between measured image and known model points from different image
pairs, poses (NStartPose1,NStartPose2), and measured points (NRow1,NCol1,NRow2, NCol2) must be
passed concatenated in a corresponding order.
The input parameter EstimateParams is used to select the parameters to be estimated. Usually this parameter
is set to ’all’, i.e., all external camera parameters (translation and rotation) and all internal camera parameters are
determined. Otherwise, EstimateParams contains a tuple of strings indicating the combination of parameters
to estimate. For instance, if the internal camera parameters already have been determined (e.g., by previous calls
to binocular_calibration), it is often desired to only determine relative the pose of the two cameras to
each other (RelPose). In this case, EstimateParams can be set to ’pose_rel’. The internal parameters can be

subsumed by the parameter values ’cam_param1’ and ’cam_param2’ as well. Note that if the polynomial model is
used to model the lens distortions, the values ’k1_i’, ’k2_i’ and ’k3_i’ can be specified individually, whereas ’p1’
and ’p2’ can only be specified in the group ’poly_tan_2_i’ (with ’i’ indicating the index of the camera). ’poly_i’
specifies the group ’k1_i’, ’k2_i’, ’k3_i’ and ’poly_tan_2_i’.
The following list contains all possible strings that can be passed to the tuple:

Allowed strings for EstimateParams — determined parameters:

’all’ (default): All internal camera parameters, as well as the relative pose of both cameras and the poses of the calibration objects.

’pose’: Relative pose between the two cameras and poses of the calibration objects.

’pose_rel’: Relative pose between the two cameras.

’alpha_rel’, ’beta_rel’, ’gamma_rel’, ’transx_rel’, ’transy_rel’, ’transz_rel’: Rotation angles and translation parameters of the relative pose between the two cameras.

’pose_caltabs’: Poses of the calibration objects.

’alpha_caltabs’, ’beta_caltabs’, ’gamma_caltabs’, ’transx_caltabs’, ’transy_caltabs’, ’transz_caltabs’: Rotation angles and translation parameters of the relative poses of the calibration objects.

’cam_param1’, ’cam_param2’: All internal camera parameters of camera 1 and camera 2, respectively.

’focus1’, ’magnification1’, ’kappa1’, ’poly_1’, ’k1_1’, ’k2_1’, ’k3_1’, ’poly_tan_2_1’, ’image_plane_dist1’, ’tilt1’, ’cx1’, ’cy1’, ’sx1’, ’sy1’, ’focus2’, ’magnification2’, ’kappa2’, ’poly_2’, ’k1_2’, ’k2_2’, ’k3_2’, ’poly_tan_2_2’, ’image_plane_dist2’, ’tilt2’, ’cx2’, ’cy2’, ’sx2’, ’sy2’: Individual internal camera parameters of camera 1 and camera 2, respectively.

’common_motion_vector’: Determines whether two line scan cameras have a common motion vector. This is the case if the two cameras are mounted rigidly and the object is moved linearly in front of the cameras or if the two rigidly mounted cameras are moved by the same linear actuator. This is assumed to be the default. Therefore, you only need to set ’~common_motion_vector’ if the cameras are moving independently in different directions.

In addition, parameters can be excluded from estimation by using the prefix ’~’. For example, the values
[’pose_rel’,’~transx_rel’] have the same effect as [’alpha_rel’,’beta_rel’,’gamma_rel’,’transy_rel’,’transz_rel’].
On the other hand, [’all’,’~focus1’] determines all internal and external parameters except the focus of camera
1, for instance. The prefix ’~’ can be used with all parameter values except ’all’.
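As a sketch of such a restricted calibration (assuming that CamParam1 and CamParam2 already contain calibrated internal parameters and that the coordinate and pose tuples have been collected as in the example below), only the relative pose could be re-estimated like this:

* Re-estimate only the relative pose; parameters not selected in
* EstimateParams are kept at the values passed as start parameters.
binocular_calibration (X, Y, Z, Rows1, Cols1, Rows2, Cols2, CamParam1, \
                       CamParam2, StartPoses1, StartPoses2, 'pose_rel', \
                       CamParamOut1, CamParamOut2, NFinalPose1, \
                       NFinalPose2, RelPose, Errors)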
The underlying camera model is explained in the chapter Calibration. The calibrated internal camera parameters
are returned in CamParam1 for camera 1 and in CamParam2 for camera 2.
The external parameters are returned analogously to camera_calibration, the 3D transformation poses
of the calibration model to the respective camera coordinate system (ccs) are returned in NFinalPose1 and
NFinalPose2. Thus, the poses are in the form ccs Pwcs , where wcs denotes the world coordinate system of the
3D calibration model (see Transformations / Poses and “Solution Guide III-C - 3D Vision”). The
relative pose ccs1 Pccs2 , RelPose, specifies the transformation of points in ccs2 into ccs1. Therewith, the final
poses are related to each other (neglecting differences due to the balancing effects of the multi-image calibration)
by:
HomMat3D_NFinalPose2 = INV(HomMat3D_RelPose) * HomMat3D_NFinalPose1,
where HomMat3D_* denotes a homogeneous transformation matrix of the respective poses and INV() inverts a
homogeneous matrix.
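This relation can be checked numerically in HDevelop, for instance as sketched below (assuming the output tuples of a previous binocular_calibration call; Pose2Check is a hypothetical helper variable):

* Verify NFinalPose2 = INV(RelPose) * NFinalPose1 for the first
* calibration plate pose (elements 0..6 of the pose tuples).
pose_to_hom_mat3d (RelPose, HomMat3DRel)
pose_to_hom_mat3d (NFinalPose1[0:6], HomMat3DPose1)
hom_mat3d_invert (HomMat3DRel, HomMat3DRelInv)
hom_mat3d_compose (HomMat3DRelInv, HomMat3DPose1, HomMat3DPose2)
hom_mat3d_to_pose (HomMat3DPose2, Pose2Check)
* Pose2Check should (up to balancing effects) match NFinalPose2[0:6].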


The computed average errors returned in Errors give an impression of the accuracy of the calibration. Using
the determined camera parameters, they denote the average Euclidean distance (in pixels) between the projections
of the mark centers and their extracted image coordinates.
For cameras with telecentric lenses, additional conditions must be fulfilled for the setup. They can be found in the
chapter Calibration.
Attention
Stereo setups that contain cameras with and without hypercentric lenses at the same time are not supported. Fur-
thermore, stereo setups that contain area scan and line scan cameras at the same time are not supported.
Parameters
. NX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number-array ; real / integer
Ordered Tuple with all X-coordinates of the calibration marks (in meters).
. NY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number-array ; real / integer
Ordered Tuple with all Y-coordinates of the calibration marks (in meters).
Number of elements: NY == NX
. NZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number-array ; real / integer
Ordered Tuple with all Z-coordinates of the calibration marks (in meters).
Number of elements: NZ == NX
. NRow1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Ordered Tuple with all row-coordinates of the extracted calibration marks of camera 1 (in pixels).
. NCol1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Ordered Tuple with all column-coordinates of the extracted calibration marks of camera 1 (in pixels).
Number of elements: NCol1 == NRow1
. NRow2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Ordered Tuple with all row-coordinates of the extracted calibration marks of camera 2 (in pixels).
Number of elements: NRow2 == NRow1
. NCol2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Ordered Tuple with all column-coordinates of the extracted calibration marks of camera 2 (in pixels).
Number of elements: NCol2 == NRow1
. StartCamParam1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Initial values for the internal parameters of camera 1.
. StartCamParam2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Initial values for the internal parameters of camera 2.
. NStartPose1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
Ordered tuple with all initial values for the poses of the calibration model in relation to camera 1.
Number of elements: NStartPose1 == 7 * NRow1 / NX
. NStartPose2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
Ordered tuple with all initial values for the poses of the calibration model in relation to camera 2.
Number of elements: NStartPose2 == 7 * NRow1 / NX
. EstimateParams (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
Camera parameters to be estimated.
Default: ’all’
List of values: EstimateParams ∈ {’all’, ’pose’, ’pose_caltabs’, ’pose_rel’, ’cam_param1’,
’cam_param2’, ’alpha_rel’, ’beta_rel’, ’gamma_rel’, ’transx_rel’, ’transy_rel’, ’transz_rel’, ’alpha_caltabs’,
’beta_caltabs’, ’gamma_caltabs’, ’transx_caltabs’, ’transy_caltabs’, ’transz_caltabs’, ’focus1’,
’magnification1’, ’kappa1’, ’poly_1’, ’k1_1’, ’k2_1’, ’k3_1’, ’poly_tan_2_1’, ’image_plane_dist1’, ’tilt1’,
’cx1’, ’cy1’, ’sx1’, ’sy1’, ’focus2’, ’magnification2’, ’kappa2’, ’poly_2’, ’k1_2’, ’k2_2’, ’k3_2’,
’poly_tan_2_2’, ’image_plane_dist2’, ’tilt2’, ’cx2’, ’cy2’, ’sx2’, ’sy2’, ’common_motion_vector’}
. CamParam1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .campar ; real / integer / string
Internal parameters of camera 1.
. CamParam2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .campar ; real / integer / string
Internal parameters of camera 2.
. NFinalPose1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
Ordered tuple with all poses of the calibration model in relation to camera 1.
Number of elements: NFinalPose1 == 7 * NRow1 / NX
. NFinalPose2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
Ordered tuple with all poses of the calibration model in relation to camera 2.
Number of elements: NFinalPose2 == 7 * NRow1 / NX


. RelPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer


Pose of camera 2 in relation to camera 1.
. Errors (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Average error distances in pixels.
Example

* Open image source.


open_framegrabber ('File', 1, 1, 0, 0, 0, 0, 'default', -1, 'default', -1, \
'default', 'images_l.seq', 'default', 0, -1, AcqHandle1)
open_framegrabber ('File', 1, 1, 0, 0, 0, 0, 'default', -1, 'default', -1, \
'default', 'images_r.seq', 'default', 1, -1, AcqHandle2)

* Initialize the start parameters.
* Width and Height must contain the image size of the cameras; adapt these
* values to your setup (they could also be queried with get_image_size).
Width := 768
Height := 576
caltab_points ('caltab_30mm.descr', X, Y, Z)
StartCamParam1 := ['area_scan_division', 0.0125, 0, 7.4e-6, 7.4e-6, \
                   Width/2.0, Height/2.0, Width, Height]
StartCamParam2 := StartCamParam1
Rows1 := []
Cols1 := []
StartPoses1 := []
Rows2 := []
Cols2 := []
StartPoses2 := []

* Find calibration marks and startposes.


for i := 0 to 11 by 1
grab_image_async (Image1, AcqHandle1, -1)
grab_image_async (Image2, AcqHandle2, -1)
find_caltab (Image1, CalPlate1, 'caltab_30mm.descr', 3, 120, 5)
find_caltab (Image2, CalPlate2, 'caltab_30mm.descr', 3, 120, 5)
find_marks_and_pose (Image1, CalPlate1, 'caltab_30mm.descr', \
StartCamParam1, 128, 10, 20, 0.7, 5, 100, \
RCoord1, CCoord1, StartPose1)
Rows1 := [Rows1,RCoord1]
Cols1 := [Cols1,CCoord1]
StartPoses1 := [StartPoses1,StartPose1]
find_marks_and_pose (Image2, CalPlate2, 'caltab_30mm.descr', \
StartCamParam2, 128, 10, 20, 0.7, 5, 100, \
RCoord2, CCoord2, StartPose2)
Rows2 := [Rows2,RCoord2]
Cols2 := [Cols2,CCoord2]
StartPoses2 := [StartPoses2,StartPose2]
endfor

* Calibrate the stereo rig.


binocular_calibration (X, Y, Z, Rows1, Cols1, Rows2, Cols2, StartCamParam1, \
StartCamParam2, StartPoses1, StartPoses2, 'all', \
CamParam1, CamParam2, NFinalPose1, NFinalPose2, \
RelPose, Errors)
* Archive the results.
write_cam_par (CamParam1, 'cam_left-125.dat')
write_cam_par (CamParam2, 'cam_right-125.dat')
write_pose (RelPose, 'rel_pose.dat')

* Rectify the stereo images.


gen_binocular_rectification_map (Map1, Map2, CamParam1, CamParam2, \
                                 RelPose, 1, 'viewing_direction', 'bilinear', \
                                 CamParamRect1, CamParamRect2, \
                                 CamPoseRect1, CamPoseRect2, \
                                 RelPoseRect)
map_image (Image1, Map1, ImageMapped1)
map_image (Image2, Map2, ImageMapped2)

Result
binocular_calibration returns 2 (H_MSG_TRUE) if all parameter values are correct and the desired pa-
rameters have been determined by the minimization algorithm. If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
find_marks_and_pose, caltab_points, read_cam_par
Possible Successors
write_pose, write_cam_par, pose_to_hom_mat3d, disp_caltab,
gen_binocular_rectification_map
See also
find_caltab, sim_caltab, read_cam_par, create_pose, convert_pose_type, read_pose,
hom_mat3d_to_pose, create_caltab, binocular_disparity, binocular_distance
Module
3D Metrology

6.2 Calibration Object

caltab_points ( : : CalPlateDescr : X, Y, Z )

Read the mark center points from the calibration plate description file.
caltab_points reads the mark center points from the calibration plate description file CalPlateDescr (see
gen_caltab for calibration plates with rectangularly arranged marks and create_caltab for calibration
plates with hexagonally arranged marks) and returns their coordinates in X, Y and Z. The mark center points are
3D coordinates in the calibration plate coordinate system and describe the 3D model of the calibration plate. The
calibration plate coordinate system is located in the middle of the surface of the calibration plate for calibration
plates with rectangularly arranged marks and at the center of the central mark of the first finder pattern for calibra-
tion plates with hexagonally arranged marks. Its z-axis points into the calibration plate, its x-axis to the right, and
its y-axis downwards.
The mark center points are typically used as input parameters for the operator camera_calibration.
Parameters

. CalPlateDescr (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string


File name of the calibration plate description.
Default: ’calplate_320mm.cpd’
List of values: CalPlateDescr ∈ {’calplate_5mm.cpd’, ’calplate_10mm.cpd’, ’calplate_20mm.cpd’,
’calplate_40mm.cpd’, ’calplate_80mm.cpd’, ’calplate_160mm.cpd’, ’calplate_320mm.cpd’,
’calplate_640mm.cpd’, ’calplate_1200mm.cpd’, ’calplate_20mm_dark_on_light.cpd’,
’calplate_40mm_dark_on_light.cpd’, ’calplate_80mm_dark_on_light.cpd’, ’caltab_650um.descr’,
’caltab_2500um.descr’, ’caltab_6mm.descr’, ’caltab_10mm.descr’, ’caltab_30mm.descr’,
’caltab_100mm.descr’, ’caltab_200mm.descr’, ’caltab_800mm.descr’, ’caltab_small.descr’,
’caltab_big.descr’}
File extension: .cpd, .descr
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
X coordinates of the mark center points in the coordinate system of the calibration plate.


. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real


Y coordinates of the mark center points in the coordinate system of the calibration plate.
. Z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Z coordinates of the mark center points in the coordinate system of the calibration plate.
Example

* Read calibration image.


read_image(Image, 'calib/calib-3d-coord-03')
CalTabDescr := 'caltab_100mm.descr'
* Find calibration pattern.
find_caltab(Image, CalPlate1, CalTabDescr, 3, 112, 5)
* Find calibration marks and start poses.
StartCamPar := ['area_scan_division', 0.008, 0.0, 0.000011, 0.000011, \
384, 288, 768, 576]
find_marks_and_pose(Image,CalPlate1,CalTabDescr, StartCamPar, \
128, 10, 18, 0.9, 15.0, 100.0, RCoord1, CCoord1, \
StartPose1)
* Read 3D positions of calibration marks.
caltab_points(CalTabDescr, NX, NY, NZ)
* Calibrate camera.
camera_calibration(NX, NY, NZ, RCoord1, CCoord1, StartCamPar, \
StartPose1, 'all', CameraParam, FinalPose, Errors)
* Visualize calibration result (WindowHandle is the graphics window,
* here obtained from the active HDevelop window).
dev_get_window(WindowHandle)
dev_display(Image)
disp_caltab(WindowHandle, CalTabDescr, CameraParam, FinalPose, 1.0)

Result
caltab_points returns 2 (H_MSG_TRUE) if all parameter values are correct and the file CalPlateDescr
has been read successfully. If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Successors
camera_calibration
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab,
project_3d_point, get_line_of_sight, gen_caltab
Module
Foundation

create_caltab ( : : NumRows, MarksPerRow, Diameter, FinderRow,
    FinderColumn, Polarity, CalPlateDescr, CalPlatePSFile : )

Generate a calibration plate description file and a corresponding PostScript file for a calibration plate with hexag-
onally arranged marks.
create_caltab creates the description file of a standard HALCON calibration plate with hexagonally arranged
marks. This calibration plate contains MarksPerRow times NumRows circular marks. These marks are arranged
in a hexagonal lattice such that each mark (except the ones at the border) has six equidistant neighbors.


(Figure: A standard HALCON calibration plate with hexagonally arranged marks and its coordinate system.)

The diameter of the marks is given by the parameter Diameter in meters. The distance between the centers of
horizontally neighboring calibration marks is 2 · Diameter. The distance between neighboring rows of
calibration marks is 2 · Diameter · √0.75. The width and the height of the generated calibration plate
can be calculated with the following equations:

$$\mathrm{Width} = \left( 2 \left\lfloor \tfrac{\mathrm{MarksPerRow} - 1}{2} \right\rfloor + 3 \right) \cdot 2 \cdot \mathrm{Diameter}$$

$$\mathrm{Height} = \left( 2 \left\lfloor \tfrac{\mathrm{NumRows} - 1}{2} \right\rfloor \cdot \sqrt{3} + 5 \right) \cdot \mathrm{Diameter}$$
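As a quick plausibility check, this HDevelop sketch evaluates the two formulas for the default parameters (27 rows, 31 marks per row, Diameter = 0.00258065 m); the results should match the width and height quoted in the sample description file below.

* Plate dimensions for the default calibration plate parameters.
NumRows := 27
MarksPerRow := 31
Diameter := 0.00258065
Width := (2 * floor((MarksPerRow - 1) / 2.0) + 3) * 2 * Diameter
Height := (2 * floor((NumRows - 1) / 2.0) * sqrt(3.0) + 5) * Diameter
* Width is approximately 0.1703 m, Height approximately 0.1291 m.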
The calibration plate contains one to five finder patterns. A finder pattern is a special mark hexagon (i.e. a mark and
its six neighbors) where either four or six marks contain a hole. Each of these up to five finder patterns is unique
such that it can be used to determine the orientation of the calibration plate and the position of the finder pattern
on the calibration plate. As a consequence, the calibration plate can only be found by find_calib_object
if at least one of these finder patterns is completely visible. The position of the central mark of each finder
pattern is given in FinderRow and FinderColumn. Thus, the length of the tuples given in FinderRow and
FinderColumn, respectively determine the number of finder patterns on the calibration plate. Be aware that two
finder patterns must not overlap. It is recommended to keep a certain distance between the finder patterns, so every
mark containing a hole can be assigned to a finder pattern distinctly. As a rule of thumb, if the calibration plate
contains too few marks to place all finder patterns in distinct positions, it is better to reduce the number of finder
patterns so that they can be distributed more evenly. An example case is depicted below, but note that a successful
detection of the patterns also depends on the used camera setup.
The coordinate system of the calibration plate is located in the center of the central mark of the first finder pattern.


(Figure: The finder patterns on a calibration plate should not be too close to each other (left). If there are not enough marks
on your plate to distribute the finder patterns further apart, you should reduce the number of finder patterns (right).)

Depending on Polarity the marks are either light on dark background (for ’light_on_dark’, which is the default)
or dark on light background (for ’dark_on_light’).
The file CalPlateDescr contains the calibration plate description, and must be passed to all HALCON opera-
tions using the generated calibration plate (e.g., set_calib_data_calib_object or sim_caltab). The
default HALCON file extension for the description of a calibration plate with hexagonally arranged marks is ’cpd’.
A calibration plate description file contains information about:

• the number of row and columns of the calibration plate


• the number of marks per row and column
• the offset of the coordinate system to the plate’s surface in z-direction
• the rim of the calibration plate
• the polarity of the marks
• the number and position of finder patterns
• the x,y coordinates and radius of the calibration marks

A file generated by create_caltab looks like the following (comments are marked by a ’#’ at the beginning
of a line):

# Plate Description Version 3
# HALCON Version 20.11 -- Wed Dec 16 11:02:00 2020
# Description of the standard calibration plate
# used for the camera calibration in HALCON
# (generated by create_caltab)
#
#

# 27 rows x 31 columns
# Width, height of calibration plate [meter]: 0.170323, 0.129118
# Distance between mark centers [meter]: 0.0051613

# Number of marks in y-dimension (rows)
r 27

# Number of marks in x-dimension (columns)
c 31

# offset of coordinate system in z-dimension [meter] (optional):
z 0

# rim of the calibration plate (min x, max y, max x, min y) [meter]:
o -0.083871125 0.0645592449151841 0.086451775 -0.0645592449151841

# polarity of the marks (light or dark):
p light

# number of finder pattern marks:
f 5

# position of the finder patterns (central mark): x y [index]
15 13
6 6
24 6
6 20
24 20

# calibration marks: x y radius [meter]

# calibration marks at y = -0.0581076 m


-0.07483885 -0.0581076199151841 0.001290325
-0.06967755 -0.0581076199151841 0.001290325
-0.06451625 -0.0581076199151841 0.001290325
-0.05935495 -0.0581076199151841 0.001290325
-0.05419365 -0.0581076199151841 0.001290325
-0.04903235 -0.0581076199151841 0.001290325
-0.04387105 -0.0581076199151841 0.001290325
-0.03870975 -0.0581076199151841 0.001290325
-0.03354845 -0.0581076199151841 0.001290325
-0.02838715 -0.0581076199151841 0.001290325
-0.02322585 -0.0581076199151841 0.001290325
-0.01806455 -0.0581076199151841 0.001290325
-0.01290325 -0.0581076199151841 0.001290325
-0.00774195 -0.0581076199151841 0.001290325
-0.00258065 -0.0581076199151841 0.001290325
0.00258065 -0.0581076199151841 0.001290325
0.00774195 -0.0581076199151841 0.001290325
0.01290325 -0.0581076199151841 0.001290325
0.01806455 -0.0581076199151841 0.001290325
0.02322585 -0.0581076199151841 0.001290325
0.02838715 -0.0581076199151841 0.001290325
0.03354845 -0.0581076199151841 0.001290325
0.03870975 -0.0581076199151841 0.001290325
0.04387105 -0.0581076199151841 0.001290325
0.04903235 -0.0581076199151841 0.001290325
0.05419365 -0.0581076199151841 0.001290325
0.05935495 -0.0581076199151841 0.001290325
0.06451625 -0.0581076199151841 0.001290325
0.06967755 -0.0581076199151841 0.001290325
0.07483885 -0.0581076199151841 0.001290325
0.08000015 -0.0581076199151841 0.001290325

# calibration marks at y = -0.0536378 m


-0.0774195 -0.0536378029986315 0.001290325
-0.0722582 -0.0536378029986315 0.001290325
-0.0670969 -0.0536378029986315 0.001290325
-0.0619356 -0.0536378029986315 0.001290325
-0.0567743 -0.0536378029986315 0.001290325
-0.051613 -0.0536378029986315 0.001290325
-0.0464517 -0.0536378029986315 0.001290325
-0.0412904 -0.0536378029986315 0.001290325


-0.0361291 -0.0536378029986315 0.001290325


-0.0309678 -0.0536378029986315 0.001290325
-0.0258065 -0.0536378029986315 0.001290325
-0.0206452 -0.0536378029986315 0.001290325
-0.0154839 -0.0536378029986315 0.001290325
-0.0103226 -0.0536378029986315 0.001290325
-0.0051613 -0.0536378029986315 0.001290325
0 -0.0536378029986315 0.001290325
0.0051613 -0.0536378029986315 0.001290325
0.0103226 -0.0536378029986315 0.001290325
0.0154839 -0.0536378029986315 0.001290325
0.0206452 -0.0536378029986315 0.001290325
0.0258065 -0.0536378029986315 0.001290325
0.0309678 -0.0536378029986315 0.001290325
0.0361291 -0.0536378029986315 0.001290325
0.0412904 -0.0536378029986315 0.001290325
0.0464517 -0.0536378029986315 0.001290325
0.051613 -0.0536378029986315 0.001290325
0.0567743 -0.0536378029986315 0.001290325
0.0619356 -0.0536378029986315 0.001290325
0.0670969 -0.0536378029986315 0.001290325
0.0722582 -0.0536378029986315 0.001290325
0.0774195 -0.0536378029986315 0.001290325

# calibration marks at y = -0.049168 m
...

# calibration marks at y = -0.0446982 m
...

# calibration marks at y = -0.0402284 m
...

# calibration marks at y = -0.0357585 m
...

# calibration marks at y = -0.0312887 m
...

# calibration marks at y = -0.0268189 m
...

# calibration marks at y = -0.0223491 m
...

# calibration marks at y = -0.0178793 m
...

# calibration marks at y = -0.0134095 m
...

# calibration marks at y = -0.00893963 m
...

# calibration marks at y = -0.00446982 m
...

# calibration marks at y = 0 m
...

# calibration marks at y = 0.00446982 m
...

# calibration marks at y = 0.00893963 m
...

# calibration marks at y = 0.0134095 m
...

# calibration marks at y = 0.0178793 m
...

# calibration marks at y = 0.0223491 m
...

# calibration marks at y = 0.0268189 m
...

# calibration marks at y = 0.0312887 m
...

# calibration marks at y = 0.0357585 m
...

# calibration marks at y = 0.0402284 m
...

# calibration marks at y = 0.0446982 m
...

# calibration marks at y = 0.049168 m
...

# calibration marks at y = 0.0536378 m
...

# calibration marks at y = 0.0581076 m
...

Note that only the coordinates and radii of the marks in the first two rows are listed completely; those of the marks
in the other rows are omitted here for better readability.
The file CalPlatePSFile contains the corresponding PostScript description of the calibration plate, which can
be used to print the calibration plate.
Attention
Depending on the accuracy of the used output device (e.g., laser printer), a printed calibration plate may not
match the values in the calibration plate description file CalPlateDescr exactly. Thus, the coordinates of the
calibration marks in the calibration plate description file may have to be corrected!
For purchased calibration plates it is recommended to use the specific calibration description file that is supplied
with your calibration plate.
Parameters
. NumRows (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of rows.
Default: 27
Recommended increment: 1
Restriction: NumRows > 2


. MarksPerRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer


Number of marks per row.
Default: 31
Recommended increment: 1
Restriction: MarksPerRow > 2
. Diameter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Diameter of the marks.
Default: 0.00258065
Suggested values: Diameter ∈ {0.00258065, 0.1, 0.0125, 0.00375, 0.00125}
. FinderRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Row indices of the finder patterns.
Default: [13,6,6,20,20]
. FinderColumn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Column indices of the finder patterns.
Default: [15,6,24,6,24]
. Polarity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Polarity of the marks
Default: ’light_on_dark’
Suggested values: Polarity ∈ {’light_on_dark’, ’dark_on_light’}
. CalPlateDescr (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name of the calibration plate description.
Default: ’calplate.cpd’
List of values: CalPlateDescr ∈ {’calplate.cpd’}
File extension: .cpd
. CalPlatePSFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name of the PostScript file.
Default: ’calplate.ps’
File extension: .ps
Example

* Parameters to create the descriptor for the 160mm wide calibration
* plate.
create_caltab (27, 31, 0.00258065, [13, 6, 6, 20, 20], [15, 6, 24, 6, 24], \
               'light_on_dark', 'calplate.cpd', 'caltab.ps')

Result
create_caltab returns 2 (H_MSG_TRUE) if all parameter values are correct and both files have been written
successfully. If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Successors
read_cam_par, caltab_points
Alternatives
gen_caltab
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab
Module
Foundation


disp_caltab ( : : WindowHandle, CalPlateDescr, CameraParam,
CalPlatePose, ScaleFac : )

Project and visualize the 3D model of the calibration plate in the image.
disp_caltab is used to visualize the calibration marks and the connecting lines between the marks of the used
calibration plate (CalPlateDescr) in the window specified by WindowHandle. Additionally, the x- and
y-axes of the plate’s coordinate system are printed on the plate’s surface. For this, the 3D model of the calibra-
tion plate is projected into the image plane using the internal (CameraParam) and external camera parameters
(CalPlatePose). Thereby the pose is in the form $^{\mathrm{ccs}}P_{\mathrm{wcs}}$, where ccs denotes the camera coordinate
system and wcs the world coordinate system (see Transformations / Poses and “Solution Guide III-C - 3D
Vision”), i.e., the pose of the calibration plate in camera coordinates. The underlying camera model is described
in Calibration.
Typically, disp_caltab is used to verify the result of the camera calibration (see Calibration or
camera_calibration) by superimposing it onto the original image. The current line width can be set by
set_line_width, the current color can be set by set_color. Additionally, the font type of the labels of the
coordinate axes can be set by set_font.
The parameter ScaleFac influences the number of supporting points to approximate the elliptic contours of the
calibration marks. You should increase the number of supporting points, if the image part in the output window is
displayed with magnification (see set_part).
Parameters
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; handle
Window in which the calibration plate should be visualized.
. CalPlateDescr (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name of the calibration plate description.
Default: ’calplate_320mm.cpd’
List of values: CalPlateDescr ∈ {’calplate_5mm.cpd’, ’calplate_10mm.cpd’, ’calplate_20mm.cpd’,
’calplate_40mm.cpd’, ’calplate_80mm.cpd’, ’calplate_160mm.cpd’, ’calplate_320mm.cpd’,
’calplate_640mm.cpd’, ’calplate_1200mm.cpd’, ’calplate_20mm_dark_on_light.cpd’,
’calplate_40mm_dark_on_light.cpd’, ’calplate_80mm_dark_on_light.cpd’, ’caltab_650um.descr’,
’caltab_2500um.descr’, ’caltab_6mm.descr’, ’caltab_10mm.descr’, ’caltab_30mm.descr’,
’caltab_100mm.descr’, ’caltab_200mm.descr’, ’caltab_800mm.descr’, ’caltab_small.descr’,
’caltab_big.descr’}
File extension: .cpd, .descr
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. CalPlatePose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
External camera parameters (3D pose of the calibration plate in camera coordinates).
Number of elements: 7
. ScaleFac (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Scaling factor for the visualization.
Default: 1.0
Suggested values: ScaleFac ∈ {0.5, 1.0, 2.0, 3.0}
Recommended increment: 0.05
Restriction: 0.0 < ScaleFac
Example

* Read image of calibration plate.
read_image (Image, 'calib/calib_single_camera_01')
get_image_size (Image, Width, Height)
* Open a window for the visualization.
dev_open_window (0, 0, Width, Height, 'black', WindowHandle)
* Create and set up the calibration model.
create_calib_data ('calibration_object', 1, 1, CalibDataID)
CalPlateDescr := 'calplate_80mm.cpd'
set_calib_data_calib_object (CalibDataID, 0, CalPlateDescr)
CamParam := ['area_scan_division', 0.008, -1500, 3.7e-6, 3.7e-6, \
640, 470, 1292, 964]
set_calib_data_cam_param (CalibDataID, 0, [], CamParam)


* Localize calibration plate in the image.


find_calib_object (Image, CalibDataID, 0, 0, 0, [], [])
get_calib_data_observ_pose (CalibDataID, 0, 0, 0, StartPose)
* Display calibration plate.
disp_caltab (WindowHandle, CalPlateDescr, CamParam, StartPose, 1)

Result
disp_caltab returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
camera_calibration, read_cam_par, read_pose
See also
find_marks_and_pose, camera_calibration, sim_caltab, write_cam_par, read_cam_par,
create_pose, write_pose, read_pose, project_3d_point, get_line_of_sight
Module
Foundation

find_calib_object ( Image : : CalibDataID, CameraIdx, CalibObjIdx,
CalibObjPoseIdx, GenParamName, GenParamValue : )

Find the HALCON calibration plate and set the extracted points and contours in a calibration data model.
find_calib_object searches in Image for a HALCON calibration plate corresponding to the description
of the calibration object with the index CalibObjIdx from the calibration data model CalibDataID. If a
calibration plate is found, find_calib_object extracts the centers and the contours of its marks and estimates
the pose of the plate relative to the observing camera CameraIdx. All collected observation data is stored in
the calibration data model for the calibration object pose CalibObjPoseIdx. In order to ensure a successful
detection of the calibration plate, at least one finder pattern has to be visible in the image. For calibration plates
with hexagonally arranged marks, this is a special hexagonal group of marks in which either four or six marks contain a hole,
while for calibration plates with rectangularly arranged marks this is the border of the calibration plate with a
triangle in one corner.
Preparation of the input data
Before the operator find_calib_object can be called, a calibration data model has to be defined performing
the following steps:

1. Create a calibration data model with the operator create_calib_data, specifying the number of
cameras in the setup and the number of used calibration objects.
2. Specify the camera type and the initial internal camera parameters for all cameras with the operator
set_calib_data_cam_param. Note that only cameras of the same type can be calibrated in a single
setup.
3. Specify the description of all calibration objects with the operator
set_calib_data_calib_object. Note that for a successful call of find_calib_object a
valid description file of the calibration plate is necessary. This description file has to be set beforehand
via the operator set_calib_data_calib_object. As a consequence, observations of a user-defined
calibration object can only be added with the operator set_calib_data_observ_points.

Collecting observation data


find_calib_object is used to collect observations in a calibration data model. In addition, it stores additional
observation data that cannot be added to the model with set_calib_data_observ_points and that is
dependent on the used calibration plate. While for calibration plates with rectangularly arranged marks (see


gen_caltab) the rim of the calibration plate is added to the observations, calibration plates with hexagonal
pattern (see create_caltab) store one of their finder pattern. Additionally and irrespective of the used calibra-
tion plate, the contour of each mark is added to the calibration model.
Setting additional parameters
Using calibration plates with hexagonally arranged marks, the following additional parameter can be set via
GenParamName and GenParamValue:

’sigma’: Smoothing factor for the extraction of the mark contours. For increasing values of ’sigma’, the filter
width and thereby the amount of smoothing increases (see also edges_sub_pix for the influence of the
filter width on the Canny filter).
Suggested values: 0.5, 0.7, 0.9, 1.0, 1.2, 1.5
Default: 1.0

For calibration plates with rectangularly arranged marks, find_calib_object essentially en-
capsulates the sequence of three operator calls: find_caltab, find_marks_and_pose and
set_calib_data_observ_points. For this kind of calibration plates the following parameters can be
set using GenParamName and GenParamValue:

’alpha’: Smoothing factor for the extraction of the mark contours. For increasing values of ’alpha’, the filter width
and thereby the amount of smoothing decreases (see also edges_sub_pix for the influence of the filter
width on the Lanser2 filter).
Suggested values: 0.5, 0.7, 0.9, 1.0, 1.2, 1.5
Default: 0.9
’gap_tolerance’: Tolerance factor for gaps between the marks. If the marks appear closer to each other than
expected, you might set ’gap_tolerance’ < 1.0 to avoid disturbing patterns outside the calibration plate to be
associated with the calibration plate. This can typically happen if the plate is strongly tilted and positioned
in front of a background that exposes mark-like patterns. If the distances between single marks vary in a
wide range, e.g., if the calibration plate appears with strong perspective distortion in the image, you might
set ’gap_tolerance’ > 1.0 to enforce the marks grouping (see also find_caltab).
Suggested values: 0.75, 0.9, 1.0, 1.1, 1.2, 1.5
Default: 1.0
’max_diam_marks’: Maximum expected diameter of the marks (needed internally by
find_marks_and_pose). By default, this value is estimated by the preceding internal call to
find_caltab. However, if the estimation is erroneous for no obvious reason or the internal call to
find_caltab fails or is simply skipped (see ’skip_find_caltab’ below), you might have to adjust this
value.
Suggested values: 50.0, 100.0, 150.0, 200.0, 300.0
’skip_find_caltab’: Skip the internal call to find_caltab. If activated, only the domain of Image reduces the
search area for the internal call of find_marks_and_pose. Thus, a user-defined calibration plate region
can be incorporated by setting ’skip_find_caltab’=’true’ and reducing the Image domain to the user region.
List of values: ’false’, ’true’
Default: ’false’

If using a HALCON calibration plate as calibration object, it is recommended to use find_calib_object
instead of set_calib_data_observ_points where possible, since the contour information, which it stores
in the calibration data model, enables a more precise calibration procedure with calibrate_cameras.
After a successful call to find_calib_object, the extracted points can be queried
by get_calib_data_observ_points and the extracted contours can be accessed by
get_calib_data_observ_contours.
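
A minimal sketch of this call sequence might look as follows (the image file, plate description file, and
start camera parameters are illustrative placeholders, not values prescribed by this reference):

* Create and set up the calibration data model.
read_image (Image, 'calib/calib_single_camera_01')
create_calib_data ('calibration_object', 1, 1, CalibDataID)
StartCamParam := ['area_scan_division', 0.016, 0, 5.2e-6, 5.2e-6, \
                  640, 480, 1280, 960]
set_calib_data_cam_param (CalibDataID, 0, [], StartCamParam)
set_calib_data_calib_object (CalibDataID, 0, 'calplate_80mm.cpd')
* Collect the observation; 'sigma' is passed as an optional generic parameter.
find_calib_object (Image, CalibDataID, 0, 0, 0, 'sigma', 1.0)
* Query the extracted points and contours.
get_calib_data_observ_points (CalibDataID, 0, 0, 0, Row, Column, Index, Pose)
get_calib_data_observ_contours (Contours, CalibDataID, 'caltab', 0, 0, 0)
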
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2
Input image.
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.


. CameraIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer


Index of the observing camera.
Default: 0
Suggested values: CameraIdx ∈ {0, 1, 2}
. CalibObjIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the calibration object.
Default: 0
Suggested values: CalibObjIdx ∈ {0, 1, 2}
. CalibObjPoseIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observed calibration object.
Default: 0
Suggested values: CalibObjPoseIdx ∈ {0, 1, 2}
Restriction: CalibObjPoseIdx >= 0
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters to be set.
Default: []
List of values: GenParamName ∈ {’gap_tolerance’, ’alpha’, ’sigma’, ’max_diam_marks’,
’skip_find_caltab’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; string / real / integer
Values of the generic parameters to be set.
Default: []
Suggested values: GenParamValue ∈ {0.5, 0.9, 1.0, 1.2, 1.5, 2.0, ’true’, ’false’}
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
This operator modifies the state of the following input parameter:
• CalibDataID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
read_image, find_marks_and_pose, set_calib_data_cam_param,
set_calib_data_calib_object
Possible Successors
set_calib_data, calibrate_cameras
Alternatives
find_caltab, find_marks_and_pose, set_calib_data_observ_points
Module
Calibration

find_caltab ( Image : CalPlate : CalPlateDescr, SizeGauss,
MarkThresh, MinDiamMarks : )

Segment the region of a standard calibration plate with rectangularly arranged marks in the image.
find_caltab is used to determine the region of a plane calibration plate with circular marks in the input image
Image. The region must correspond to a standard calibration plate with rectangularly arranged marks described in
the file CalPlateDescr. The successfully segmented region is returned in CalPlate. The operator provides
two algorithms. By setting appropriate integer values in SizeGauss, MarkThresh, and MinDiamMarks,
respectively, you invoke the standard algorithm. If you pass a tuple of parameter names in SizeGauss and a
corresponding tuple of parameter values in MarkThresh, or just two empty tuples, respectively, you invoke the
advanced algorithm instead. In this case the value passed in MinDiamMarks is ignored.


Standard algorithm
First, the input image is smoothed (see gauss_image); the size of the used filter mask is given by SizeGauss.
Afterwards, a threshold operator (see threshold) with a minimum gray value MarkThresh is applied. Among
the extracted connected regions the most convex region with an almost correct number of holes (corresponding to
the dark marks of the calibration plate) is selected. Holes with a diameter smaller than the expected size of the
marks MinDiamMarks are eliminated to reduce the impact of noise. The number of marks is read from the
calibration plate description file CalPlateDescr. The complete explanation of this file can be found within the
description of gen_caltab.
Advanced algorithm
First, an image pyramid based on Image is built. Starting from the highest pyramid level, round regions are
segmented with a dynamic threshold. Then, they are associated in groups based on their mutual proximity and it
is evaluated whether they can represent marks of a potential calibration plate. The search is terminated once the
expected number of marks has been identified in one group. The surrounding lighter area is returned in CalPlate.
The image pyramid makes the search independent of the size of the image and the marks. The dynamic threshold
makes the algorithm immune to bad or irregular illumination. Therefore, in general, no parameter is required. Yet,
you can adjust some auxiliary parameters of the advanced algorithm by passing a list of parameter names (strings)
to SizeGauss and a list of corresponding parameter values to MarkThresh. Currently the following parameter
is supported:

’gap_tolerance’: Tolerance factor for gaps between the marks. If the marks appear closer to each other than
expected, you might set ’gap_tolerance’ < 1.0 to avoid disturbing patterns outside the calibration plate to be
associated with the calibration plate. This can typically happen if the plate is strongly tilted and positioned
in front of a background that exposes mark-like patterns. If the distances between single marks deviate
significantly, e.g., if the calibration plate appears with strong perspective distortion in the image, you might
set ’gap_tolerance’ > 1.0 to enforce the grouping for the more distant marks.
Suggested values: 0.75, 0.9, 1.0, 1.1, 1.2, 1.5
Default: 1.0

Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte / uint2
Input image.
. CalPlate (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; object
Output region.
. CalPlateDescr (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name of the calibration plate description.
Default: ’caltab_100.descr’
List of values: CalPlateDescr ∈ {’caltab_650um.descr’, ’caltab_2500um.descr’, ’caltab_6mm.descr’,
’caltab_10mm.descr’, ’caltab_30mm.descr’, ’caltab_100mm.descr’, ’caltab_200mm.descr’,
’caltab_800mm.descr’, ’caltab_small.descr’, ’caltab_big.descr’}
File extension: .descr
. SizeGauss (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Filter size of the Gaussian.
Default: 3
List of values: SizeGauss ∈ {0, 3, 5, 7, 9, 11, ’gap_tolerance’}
. MarkThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / real
Threshold value for mark extraction.
Default: 112
Suggested values: MarkThresh ∈ {48, 64, 80, 96, 112, 128, 144, 160, 0.5, 0.9, 1.0, 1.1, 1.5}
. MinDiamMarks (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Expected minimal diameter of the marks on the calibration plate.
Default: 5
Suggested values: MinDiamMarks ∈ {3, 5, 9, 15, 30, 50, 70}
Example

* Read calibration image.


read_image(Image, 'calib/calib_distorted_01')


* Find calibration pattern.


find_caltab(Image, CalPlate, 'caltab_100mm.descr', 3, 112, 5)
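* The advanced algorithm is invoked by passing tuples of parameter names and
* values (or two empty tuples) in SizeGauss and MarkThresh; a sketch with
* illustrative values (MinDiamMarks is ignored in this mode):
find_caltab(Image, CalPlate, 'caltab_100mm.descr', [], [], 5)
find_caltab(Image, CalPlate, 'caltab_100mm.descr', ['gap_tolerance'], [1.2], 5)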

Result
find_caltab returns 2 (H_MSG_TRUE) if all parameter values are correct and an image region is
found. The behavior in case of empty input (no image given) can be set via set_system(::
’no_object_result’,<Result>:) and the behavior in case of an empty result region via set_system
(::’store_empty_region’,<’true’/’false’>:). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on tuple level.
Possible Predecessors
read_image
Possible Successors
find_marks_and_pose
See also
find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab, caltab_points,
gen_caltab
Module
Foundation

find_marks_and_pose ( Image, CalPlateRegion : : CalPlateDescr,
StartCamParam, StartThresh, DeltaThresh, MinThresh, Alpha,
MinContLength, MaxDiamMarks : RCoord, CCoord, StartPose )

Extract rectangularly arranged 2D calibration marks from the image and calculate initial values for the external
camera parameters.
find_marks_and_pose is used to determine the input data for a subsequent camera calibration using a calibra-
tion plate with rectangularly arranged marks (see Calibration or camera_calibration): First, the 2D center
points [RCoord,CCoord] of the calibration marks within the region CalPlateRegion of the input image
Image are extracted and ordered. Secondly, a rough estimate for the external camera parameters (StartPose)
is computed, i.e., the 3D pose (= position and orientation) of the calibration plate relative to the camera coordinate
system (see create_pose for more information about 3D poses).
In the input image Image an edge detector is applied (see edges_image, mode ’lanser2’) to the region
CalPlateRegion, which can be found by applying the operator find_caltab. The filter parameter for
this edge detection can be tuned via Alpha. Use a smaller value for Alpha to achieve a stronger smoothing
effect. In the edge image closed contours are searched for: The number of closed contours must correspond to
the number of calibration marks as described in the calibration plate description file CalPlateDescr and the
contours have to be elliptically shaped. Contours shorter than MinContLength are discarded, just as contours
enclosing regions with a diameter larger than MaxDiamMarks (e.g., the border of the calibration plate).
For the detection of contours a threshold operator is applied on the resulting amplitudes of the edge detector. All
points with a high amplitude (i.e., borders of marks) are selected.
First, the threshold value is set to StartThresh. If the search for the closed contours or the successive pose
estimate fails, this threshold value is successively decreased by DeltaThresh down to a minimum value of
MinThresh.
Each of the found contours is refined with subpixel accuracy (see edges_sub_pix) and subsequently approxi-
mated by an ellipse. The center points of these ellipses represent a good approximation of the desired 2D image
coordinates [RCoord,CCoord] of the calibration mark center points. The order of the values within these two tu-
ples must correspond to the order of the 3D coordinates of the calibration marks in the calibration plate description
file CalPlateDescr, since this fixes the correspondences between extracted image marks and known model
marks (given by caltab_points)! If a triangular orientation mark is defined in a corner of the plate by the


plate description file (see gen_caltab), the mark will be detected and the point order is returned in row-major
order beginning with the corner mark in the (barycentric) negative quadrant with respect to the defined coordinate
system of the plate. Else, if no orientation mark is defined, the order of the center points is in row-major order
beginning at the upper left corner mark in the image.
Based on the ellipse parameters for each calibration mark, a rough estimate for the external camera parameters is
finally computed. For this purpose the fixed correspondences between extracted image marks and known model
marks are used. The estimate StartPose describes the pose of the calibration plate in the camera coordinate
system as required by the operator camera_calibration.
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2
Input image.
. CalPlateRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; object
Region of the calibration plate.
. CalPlateDescr (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name of the calibration plate description.
Default: ’caltab_100.descr’
List of values: CalPlateDescr ∈ {’caltab_650um.descr’, ’caltab_2500um.descr’, ’caltab_6mm.descr’,
’caltab_10mm.descr’, ’caltab_30mm.descr’, ’caltab_100mm.descr’, ’caltab_200mm.descr’,
’caltab_800mm.descr’, ’caltab_small.descr’, ’caltab_big.descr’}
File extension: .descr
. StartCamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Initial values for the internal camera parameters.
. StartThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Initial threshold value for contour detection.
Default: 128
Suggested values: StartThresh ∈ {80, 96, 112, 128, 144, 160}
Restriction: StartThresh > 0
. DeltaThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Loop value for successive reduction of StartThresh.
Default: 10
Suggested values: DeltaThresh ∈ {6, 8, 10, 12, 14, 16, 18, 20, 22}
Restriction: DeltaThresh > 0
. MinThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Minimum threshold for contour detection.
Default: 18
Suggested values: MinThresh ∈ {8, 10, 12, 14, 16, 18, 20, 22}
Restriction: MinThresh > 0
. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Filter parameter for contour detection, see edges_image.
Default: 0.9
Suggested values: Alpha ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1}
Value range: 0.2 ≤ Alpha ≤ 2.0
Restriction: Alpha > 0.0
. MinContLength (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Minimum length of the contours of the marks.
Default: 15.0
Suggested values: MinContLength ∈ {10.0, 15.0, 20.0, 30.0, 40.0, 100.0}
Restriction: MinContLength > 0.0
. MaxDiamMarks (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Maximum expected diameter of the marks.
Default: 100.0
Suggested values: MaxDiamMarks ∈ {50.0, 100.0, 150.0, 200.0, 300.0}
Restriction: MaxDiamMarks > 0.0
. RCoord (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Tuple with row coordinates of the detected marks.
. CCoord (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Tuple with column coordinates of the detected marks.


. StartPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer


Estimation for the external camera parameters.
Number of elements: 7
Example

* Read calibration image.


read_image(Image, 'calib/calib_distorted_01')
* Find calibration pattern.
find_caltab(Image,CalPlate, 'caltab_100mm.descr', 3, 112, 5)
* Find calibration marks and start pose.
find_marks_and_pose(Image, CalPlate, 'caltab_100mm.descr' , \
['area_scan_division', 0.008, 0.0, \
0.000011, 0.000011, 384, 288, 640, 512], \
128, 10, 18, 0.9, 15.0, 100.0, RCoord, CCoord, StartPose)
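* The extracted mark coordinates and the start pose could then be passed on
* to the calibration, e.g. (a sketch; the 3D mark coordinates are queried
* with caltab_points and all camera parameters are estimated):
caltab_points('caltab_100mm.descr', X, Y, Z)
camera_calibration(X, Y, Z, RCoord, CCoord, \
                   ['area_scan_division', 0.008, 0.0, \
                    0.000011, 0.000011, 384, 288, 640, 512], \
                   StartPose, 'all', CamParam, FinalPose, Errors)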

Result
find_marks_and_pose returns 2 (H_MSG_TRUE) if all parameter values are correct and an estimation for
the external camera parameters has been determined successfully. If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
find_caltab
Possible Successors
camera_calibration
See also
find_caltab, camera_calibration, disp_caltab, sim_caltab, read_cam_par, read_pose,
create_pose, pose_to_hom_mat3d, caltab_points, gen_caltab, edges_sub_pix,
edges_image
Module
Foundation

gen_caltab ( : : XNum, YNum, MarkDist, DiameterRatio,
CalPlateDescr, CalPlatePSFile : )

Generate a calibration plate description file and a corresponding PostScript file for a calibration plate with rectan-
gularly arranged marks.
gen_caltab generates the description of a standard HALCON calibration plate with rectangularly arranged
marks. This calibration plate consists of XNum times YNum black circular marks on a white plane which are
surrounded by a black frame.

A standard HALCON calibration plate with rectangularly arranged marks


The marks are arranged in a rectangular grid with YNum and XNum equidistant rows and columns. The distance
between these rows and columns is given by the parameter MarkDist in meters. The marks’ diameter can be set by the
parameter DiameterRatio and is defined by the equation Diameter = MarkDist · DiameterRatio. Using a
distance between marks of 0.01 m and a diameter ratio of 0.5, the width of the dark surrounding frame becomes 8
cm, and the radius of the marks is set to 2.5 mm. The coordinate system of the calibration plate is located in the
barycenter of all marks, its z-axis points into the calibration plate, its x-axis to the right, and its y-axis downwards.
The black frame of the calibration plate encloses a triangular black orientation mark in the top left corner to
uniquely determine the position of the calibration plate. The width and the height of the generated calibration plate
can be calculated with the following equations:
Width = MarkDist · (XNum + 1)
Height = MarkDist · (YNum + 1)
The file CalPlateDescr contains the calibration plate description, e.g., the number of rows and columns of the
calibration plate, the geometry of the surrounding frame (see find_caltab), the triangular orientation mark, an
offset of the coordinate system to the plate’s surface in z-direction, and the x,y coordinates and the radius of all
calibration plate marks given in the calibration plate coordinate system. The definition of the orientation and the
offset, indicated by t and z, is optional and can be commented out. The default HALCON file extension for the
calibration plate description is ’descr’. A file generated by gen_caltab looks like the following (comments are
marked by a ’#’ at the beginning of a line):

# Plate Description Version 2


# HALCON Version 7.1 -- Fri Jun 24 16:41:00 2005
# Description of the standard calibration plate
# used for the camera calibration in HALCON
# (generated by gen_caltab)
#
#
# 7 rows x 7 columns
# Width, height of the black frame [meter]: 0.1, 0.1
# Distance between mark centers [meter]: 0.0125

# Number of marks in y-dimension (rows)


r 7

# Number of marks in x-dimension (columns)


c 7

# offset of coordinate system in z-dimension [meter] (optional):


z 0

# Rectangular border (rim and black frame) of calibration plate


# rim of the calibration plate (min x, max y, max x, min y) [meter]:
o -0.05125 0.05125 0.05125 -0.05125
# outer border of the black frame (min x, max y, max x, min y) [meter]:

i -0.05 0.05 0.05 -0.05


# triangular corner mark given by two corner points (x,y, x,y) [meter]
# (optional):
t -0.05 -0.0375 -0.0375 -0.05

# width of the black frame [meter]:


w 0.003125

# calibration marks: x y radius [meter]

# calibration marks at y = -0.0375 m


-0.0375 -0.0375 0.003125
-0.025 -0.0375 0.003125
-0.0125 -0.0375 0.003125


-3.46945e-018 -0.0375 0.003125


0.0125 -0.0375 0.003125
0.025 -0.0375 0.003125
0.0375 -0.0375 0.003125

# calibration marks at y = -0.025 m


-0.0375 -0.025 0.003125
-0.025 -0.025 0.003125
-0.0125 -0.025 0.003125
-3.46945e-018 -0.025 0.003125
0.0125 -0.025 0.003125
0.025 -0.025 0.003125
0.0375 -0.025 0.003125

# calibration marks at y = -0.0125 m


-0.0375 -0.0125 0.003125
-0.025 -0.0125 0.003125
-0.0125 -0.0125 0.003125
-3.46945e-018 -0.0125 0.003125
0.0125 -0.0125 0.003125
0.025 -0.0125 0.003125
0.0375 -0.0125 0.003125

# calibration marks at y = -3.46945e-018 m


-0.0375 -3.46945e-018 0.003125
-0.025 -3.46945e-018 0.003125
-0.0125 -3.46945e-018 0.003125
-3.46945e-018 -3.46945e-018 0.003125
0.0125 -3.46945e-018 0.003125
0.025 -3.46945e-018 0.003125
0.0375 -3.46945e-018 0.003125

# calibration marks at y = 0.0125 m


-0.0375 0.0125 0.003125
-0.025 0.0125 0.003125
-0.0125 0.0125 0.003125
-3.46945e-018 0.0125 0.003125
0.0125 0.0125 0.003125
0.025 0.0125 0.003125
0.0375 0.0125 0.003125

# calibration marks at y = 0.025 m


-0.0375 0.025 0.003125
-0.025 0.025 0.003125
-0.0125 0.025 0.003125
-3.46945e-018 0.025 0.003125
0.0125 0.025 0.003125
0.025 0.025 0.003125
0.0375 0.025 0.003125

# calibration marks at y = 0.0375 m


-0.0375 0.0375 0.003125
-0.025 0.0375 0.003125
-0.0125 0.0375 0.003125
-3.46945e-018 0.0375 0.003125
0.0125 0.0375 0.003125
0.025 0.0375 0.003125
0.0375 0.0375 0.003125


The file CalPlatePSFile contains the corresponding PostScript description of the calibration plate.
Attention
Depending on the accuracy of the used output device (e.g., laser printer), the printed calibration plate may not
match the values in the calibration plate description file CalPlateDescr exactly. Thus, the coordinates of the
calibration marks in the calibration plate description file may have to be corrected!
Parameters
. XNum (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of marks in x direction.
Default: 7
Suggested values: XNum ∈ {5, 7, 9}
Recommended increment: 1
Restriction: XNum > 1
. YNum (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of marks in y direction.
Default: 7
Suggested values: YNum ∈ {5, 7, 9}
Recommended increment: 1
Restriction: YNum > 1
. MarkDist (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Distance of the marks in meters.
Default: 0.0125
Suggested values: MarkDist ∈ {0.1, 0.0125, 0.00375, 0.00125}
Restriction: 0.0 < MarkDist
. DiameterRatio (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Ratio of the mark diameter to the mark distance.
Default: 0.5
Suggested values: DiameterRatio ∈ {0.5, 0.55, 0.6, 0.65}
Restriction: 0.0 < DiameterRatio < 1.0
. CalPlateDescr (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name of the calibration plate description.
Default: ’caltab.descr’
List of values: CalPlateDescr ∈ {’caltab.descr’}
File extension: .descr
. CalPlatePSFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name of the PostScript file.
Default: ’caltab.ps’
File extension: .ps
Example

* Create calibration plate with width = 80 cm.


gen_caltab( 7, 7, 0.1, 0.5, 'caltab.descr', 'caltab.ps')

Result
gen_caltab returns 2 (H_MSG_TRUE) if all parameter values are correct and both files have been written
successfully. If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Successors
read_cam_par, caltab_points
Alternatives
create_caltab


See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab
Module
Foundation

sim_caltab ( : SimImage : CalPlateDescr, CameraParam, CalPlatePose,
GrayBackground, GrayPlate, GrayMarks, ScaleFac : )

Simulate an image with calibration plate.


sim_caltab is used to generate a simulated calibration image. The calibration plate description is read
from the file CalPlateDescr and is projected into the image plane using the given camera parameters,
i.e., the internal camera parameters CameraParam and the external camera parameters CalPlatePose (see also
project_3d_point). Thereby the pose is expected to be in the form $^{\mathrm{ccs}}P_{\mathrm{wcs}}$, where ccs denotes the
camera coordinate system and wcs the world coordinate system (see Transformations / Poses and “Solution
Guide III-C - 3D Vision”).
In the simulated image only the calibration plate is shown. The image background is set to the gray value
GrayBackground, the calibration plate background is set to GrayPlate, and the calibration marks are set
to the gray value GrayMarks. The parameter ScaleFac influences the number of supporting points to approxi-
mate the elliptic contours of the calibration marks, see also disp_caltab. Increasing the number of supporting
points causes a more accurate determination of the mark boundary, but increases the computation time, too. For
each pixel of the simulated image which touches a subpixel boundary of this kind, the gray value is interpolated linearly
between GrayMarks and GrayPlate according to the proportion of the pixel lying inside and outside the mark.
By applying the operator sim_caltab you can generate synthetic calibration images (with known camera pa-
rameters!) to test the quality of the calibration algorithm (see Calibration).
Parameters
. SimImage (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; object : byte
Simulated calibration image.
. CalPlateDescr (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name of the calibration plate description.
Default: ’calplate_320mm.cpd’
List of values: CalPlateDescr ∈ {’calplate_5mm.cpd’, ’calplate_10mm.cpd’, ’calplate_20mm.cpd’,
’calplate_40mm.cpd’, ’calplate_80mm.cpd’, ’calplate_160mm.cpd’, ’calplate_320mm.cpd’,
’calplate_640mm.cpd’, ’calplate_1200mm.cpd’, ’calplate_20mm_dark_on_light.cpd’,
’calplate_40mm_dark_on_light.cpd’, ’calplate_80mm_dark_on_light.cpd’, ’caltab_650um.descr’,
’caltab_2500um.descr’, ’caltab_6mm.descr’, ’caltab_10mm.descr’, ’caltab_30mm.descr’,
’caltab_100mm.descr’, ’caltab_200mm.descr’, ’caltab_800mm.descr’, ’caltab_small.descr’,
’caltab_big.descr’}
File extension: .cpd, .descr
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. CalPlatePose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
External camera parameters (3D pose of the calibration plate in camera coordinates).
Number of elements: 7
. GrayBackground (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Gray value of image background.
Default: 128
Suggested values: GrayBackground ∈ {0, 32, 64, 96, 128, 160}
Restriction: 0 <= GrayBackground <= 255
. GrayPlate (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Gray value of calibration plate.
Default: 80
Suggested values: GrayPlate ∈ {144, 160, 176, 192, 208, 224, 240}
Restriction: 0 <= GrayPlate <= 255


. GrayMarks (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer


Gray value of calibration marks.
Default: 224
Suggested values: GrayMarks ∈ {16, 32, 48, 64, 80, 96, 112}
Restriction: 0 <= GrayMarks <= 255
. ScaleFac (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Scaling factor to reduce oversampling.
Default: 1.0
Suggested values: ScaleFac ∈ {1.0, 0.5, 0.25, 0.125}
Recommended increment: 0.05
Restriction: 1.0 >= ScaleFac
Example

* Read calibration image.


read_image(Image1, 'calib-01')
* Create and set up the calibration data model.
CameraType := 'area_scan_division'
StartCamPar := [CameraType, Focus, Kappa, Sx, Sy, Cx, Cy, \
                ImageWidth, ImageHeight]
create_calib_data ('calibration_object', 1, 1, CalibDataID)
set_calib_data_cam_param (CalibDataID, 0, [], StartCamPar)
set_calib_data_calib_object (CalibDataID, 0, 'calplate.cpd')
* Find calibration marks and initial pose.
find_calib_object (Image1, CalibDataID, 0, 0, 0, [], [])
* Camera calibration.
calibrate_cameras (CalibDataID, Error)
* Simulate calibration image.
get_calib_data (CalibDataID, 'calib_obj_pose', [0, 0], 'pose', FinalPose)
get_calib_data (CalibDataID, 'camera', 0, 'params', CameraParam)
sim_caltab(Image1Sim, 'calplate.cpd', CameraParam, FinalPose, 128, \
80, 224, 1)

Result
sim_caltab returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.

Possible Predecessors
camera_calibration, find_marks_and_pose, read_pose, read_cam_par,
hom_mat3d_to_pose
Possible Successors
find_caltab
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, create_pose,
hom_mat3d_to_pose, project_3d_point, gen_caltab
Module
Calibration


6.3 Camera Parameters


cam_mat_to_cam_par ( : : CameraMatrix, Kappa, ImageWidth,
ImageHeight : CameraParam )

Compute the internal camera parameters from a camera matrix.


cam_mat_to_cam_par computes internal camera parameters from the camera matrix CameraMatrix, the
radial distortion coefficient Kappa, the image width ImageWidth, and the image height ImageHeight.
The camera parameters are returned in CameraParam. The parameters CameraMatrix and Kappa
can be determined with stationary_camera_self_calibration. cam_mat_to_cam_par
converts this representation of the internal camera parameters into the representation used by
camera_calibration. The conversion can only be performed if the skew of the image axes is set to
0 in stationary_camera_self_calibration, i.e., if the parameter ’skew’ is not being determined.
Parameters
. CameraMatrix (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real
3 × 3 projective camera matrix that determines the internal camera parameters.
. Kappa (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Kappa.
. ImageWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the images that correspond to CameraMatrix.
Restriction: ImageWidth > 0
. ImageHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .extent.y ; integer
Height of the images that correspond to CameraMatrix.
Restriction: ImageHeight > 0
. CameraParam (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
Example

* For the input data to stationary_camera_self_calibration, please
* refer to the example for stationary_camera_self_calibration.
stationary_camera_self_calibration (4, 640, 480, 1, From, To, \
HomMatrices2D, Rows1, Cols1, \
Rows2, Cols2, NumMatches, \
'gold_standard', \
['focus','principal_point','kappa'], \
'true', CameraMatrix, Kappa, \
RotationMatrices, X, Y, Z, Error)
cam_mat_to_cam_par (CameraMatrix, Kappa, 640, 480, CameraParam)

Result
If the parameters are valid, the operator cam_mat_to_cam_par returns the value 2 (H_MSG_TRUE). If neces-
sary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
stationary_camera_self_calibration
See also
camera_calibration, cam_par_to_cam_mat
Module
Calibration


cam_par_to_cam_mat ( : : CameraParam : CameraMatrix, ImageWidth,
ImageHeight )

Compute a camera matrix from internal camera parameters.


cam_par_to_cam_mat computes the camera matrix CameraMatrix as well as the image width
ImageWidth, and the image height ImageHeight from the internal camera parameters CameraParam.
The internal camera parameters CameraParam can be determined with camera_calibration.
cam_par_to_cam_mat converts this representation of the internal camera parameters into the representa-
tion used by stationary_camera_self_calibration. The conversion can only be performed if the
camera is an area scan pinhole camera and the distortion coefficients in CameraParam are 0. If necessary,
change_radial_distortion_cam_par must be used to set the distortion coefficients to 0.
Parameters
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. CameraMatrix (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real
3 × 3 projective camera matrix that corresponds to CameraParam.
. ImageWidth (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the images that correspond to CameraMatrix.
Assertion: ImageWidth > 0
. ImageHeight (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the images that correspond to CameraMatrix.
Assertion: ImageHeight > 0
Example

* For the input data to calibrate_cameras, please refer to the
* example for calibrate_cameras.
calibrate_cameras (CalibDataID, Error)
get_calib_data (CalibDataID, 'camera', 0, 'params', CameraParam)
cam_par_to_cam_mat (CameraParam, CameraMatrix, ImageWidth, ImageHeight)

* Alternatively, the following calls can be used.


change_radial_distortion_cam_par ('adaptive', CameraParam, 0, CamParamOut)
cam_par_to_cam_mat (CamParamOut, CameraMatrix, ImageWidth, ImageHeight)

Result
If the parameters are valid, the operator cam_par_to_cam_mat returns the value 2 (H_MSG_TRUE). If neces-
sary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
camera_calibration
See also
stationary_camera_self_calibration, cam_mat_to_cam_par
Module
Calibration

deserialize_cam_par ( : : SerializedItemHandle : CameraParam )

Deserialize the serialized internal camera parameters.


deserialize_cam_par deserializes the internal camera parameters that were serialized by
serialize_cam_par (see fwrite_serialized_item for an introduction to the basic principle of
serialization). The serialized camera parameters are defined by the handle SerializedItemHandle. The
deserialized values are stored in an automatically created tuple with the handle CameraParam.
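
A minimal sketch of reading back camera parameters that were previously serialized and written to a
binary file (the file name is an illustrative placeholder):

* Read the serialized item from a binary file and deserialize it.
open_file ('campar.ser', 'input_binary', FileHandle)
fread_serialized_item (FileHandle, SerializedItemHandle)
close_file (FileHandle)
deserialize_cam_par (SerializedItemHandle, CameraParam)
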
Parameters
. SerializedItemHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
. CameraParam (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
Result
If the parameters are valid, the operator deserialize_cam_par returns the value 2 (H_MSG_TRUE). If nec-
essary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
fread_serialized_item, receive_serialized_item, serialize_cam_par
Module
Foundation

read_cam_par ( : : CamParFile : CameraParam )

Read internal camera parameters from a file.


read_cam_par reads the internal camera parameters CameraParam from a file with name CamParFile.
The file must have been written by write_cam_par.
The default HALCON file extension for the camera parameters is ’dat’.
The number of values in CameraParam depends on the specified camera type. See the description of
set_calib_data_cam_param for a list of values and the chapter Calibration for details on camera types
and camera parameters.
Parameters
. CamParFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name of internal camera parameters.
Default: ’campar.dat’
List of values: CamParFile ∈ {’campar.dat’, ’campar.initial’, ’campar.final’}
File extension: .dat
. CameraParam (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
Example

* Create sample camera parameters and write them to file.


gen_cam_par_area_scan_division (0.01, -731, 5.2e-006, 5.2e-006, \
654, 519, 1280, 1024, CameraParamTmp)
write_cam_par (CameraParamTmp, 'campar_tmp.dat')
* Read internal camera parameters.
read_cam_par('campar_tmp.dat', CameraParam)

Result
read_cam_par returns 2 (H_MSG_TRUE) if all parameter values are correct and the file has been read success-
fully. If necessary an exception is raised.


Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Successors
find_marks_and_pose, sim_caltab, gen_caltab, disp_caltab, camera_calibration
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab,
write_cam_par, write_pose, read_pose, project_3d_point, get_line_of_sight
Module
Foundation

serialize_cam_par ( : : CameraParam : SerializedItemHandle )

Serialize the internal camera parameters.


serialize_cam_par serializes the internal camera parameters (see fwrite_serialized_item for an
introduction to the basic principle of serialization). The same data that is written to a file by write_cam_par
is converted to a serialized item. The camera parameters are defined by the tuple CameraParam. The seri-
alized camera parameters are returned by the handle SerializedItemHandle and can be deserialized by
deserialize_cam_par.
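
A minimal sketch of serializing camera parameters and writing them to a binary file (the parameter
values and the file name are illustrative placeholders):

* Generate sample camera parameters and serialize them.
gen_cam_par_area_scan_division (0.01, -731, 5.2e-6, 5.2e-6, \
                                654, 519, 1280, 1024, CameraParam)
serialize_cam_par (CameraParam, SerializedItemHandle)
* Write the serialized item to a binary file.
open_file ('campar.ser', 'output_binary', FileHandle)
fwrite_serialized_item (FileHandle, SerializedItemHandle)
close_file (FileHandle)
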
Parameters

. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string


Internal camera parameters.
. SerializedItemHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
Result
If the parameters are valid, the operator serialize_cam_par returns the value 2 (H_MSG_TRUE). If neces-
sary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Successors
fwrite_serialized_item, send_serialized_item, deserialize_cam_par
Module
Foundation

write_cam_par ( : : CameraParam, CamParFile : )

Write internal camera parameters into a file.


write_cam_par stores the internal camera parameters CameraParam into a file specified by its file name
CamParFile.
The number of values in CameraParam depends on the specified camera type. See the description of
set_calib_data_cam_param for a list of values and the chapter Calibration for details on camera types
and camera parameters.


The default HALCON file extension for the camera parameters is ’dat’.
The internal camera parameters can be later read with read_cam_par.
Parameters
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. CamParFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name of internal camera parameters.
Default: ’campar.dat’
List of values: CamParFile ∈ {’campar.dat’, ’campar.initial’, ’campar.final’}
File extension: .dat
Example

*
* Calibrate the camera.
*
StartCamPar := ['area_scan_division', 0.016, 0, 0.0000074, 0.0000074, \
326, 247, 652, 494]
create_calib_data ('calibration_object', 1, 1, CalibDataID)
set_calib_data_cam_param (CalibDataID, 0, [], StartCamPar)
set_calib_data_calib_object (CalibDataID, 0, 'caltab_30mm.descr')
NumImages := 10
for I := 1 to NumImages by 1
read_image (Image, '3d_machine_vision/calib/calib_' + I$'02d')
find_calib_object (Image, CalibDataID, 0, 0, I, [], [])
get_calib_data_observ_contours (Caltab, CalibDataID, 'caltab', 0, 0, I)
endfor
calibrate_cameras (CalibDataID, Error)
get_calib_data (CalibDataID, 'camera', 0, 'params', CamParam)
* Write the internal camera parameters to a file.
write_cam_par (CamParam, 'camera_parameters.dat')

Result
write_cam_par returns 2 (H_MSG_TRUE) if all parameter values are correct and the file has been written
successfully. If necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
camera_calibration
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab,
read_cam_par, write_pose, read_pose, project_3d_point, get_line_of_sight
Module
Foundation

6.4 Hand-Eye

calibrate_hand_eye ( : : CalibDataID : Errors )

Perform a hand-eye calibration.


The operator calibrate_hand_eye determines the 3D pose of a robot (“hand”) relative to a camera or 3D
sensor (“eye”) based on the calibration data model CalibDataID. With the determined 3D poses, the poses of
the calibration object in the camera coordinate system can be transformed into the coordinate system of the robot
which can then, e.g., grasp an inspected part. There are two possible configurations of robot-camera (hand-eye)
systems: The camera can be mounted on the robot or be stationary and observe the robot. Note that the term robot
is used here to denote any mechanism that moves objects. Thus, you can use calibrate_hand_eye to calibrate
many different systems, from pan-tilt heads to multi-axis manipulators.
In essence, systems suitable for hand-eye calibration are described by a closed chain of four Euclidean transforma-
tions. In this chain two non-consecutive transformations are either known from the robot controller or computed
from camera data, e.g., calibration object poses observed by a camera. The two unknown constant transformations
are computed by the hand-eye calibration procedure.
A hand-eye calibration is performed similarly to the calibration of the external camera parameters (see Calibration):
You acquire a set of poses of a calibration object in the camera coordinate system, and a corresponding set of poses
of the tool in robot base coordinates and set them in the calibration data model CalibDataID.
In contrast to the camera calibration, the calibration object is not moved manually. This task is delegated to
the robot. Basically, two hand-eye calibration scenarios can be distinguished. A robot either moves the camera
(moving camera) or it moves the calibration object (stationary camera). The robot’s movements are assumed
to be known. They are used as an input for the hand-eye calibration and are set in the calibration data model
CalibDataID using set_calib_data.
The results of a hand-eye calibration are two poses: For the moving camera scenario, the 3D pose of the tool in
the camera coordinate system (’tool_in_cam_pose’) and the 3D pose of the calibration object in the robot base
coordinate system (’obj_in_base_pose’) are computed. For the stationary camera scenario, the 3D pose of the
robot base in the camera coordinate system (’base_in_cam_pose’) and the 3D pose of the calibration object in the
tool coordinate system (’obj_in_tool_pose’) are computed. Their pose type is identical to the pose type of the input
poses. If the input poses have different pose types, poses of type 0 are returned.
The two hand-eye calibration scenarios are discussed in more detail below, followed by general information about
the data for and the preparation of the calibration data model.
Moving camera (mounted on a robot)
In this configuration, the calibration object remains stationary. The camera is mounted on the robot and is moved
to different positions by the robot. The main idea behind the hand-eye calibration is that the information extracted
from an observation of the calibration object, i.e., the pose of the calibration object relative to the camera, can be
seen as a chain of poses or homogeneous transformation matrices from the calibration object via the base of the
robot to its tool (end-effector) and finally to the camera:

Moving camera:

    camera Hcal = camera Htool · (base Htool)^(-1) · base Hcal

i.e., ’obj_in_cam_pose’ corresponds to camera Hcal, ’tool_in_cam_pose’ to camera Htool,
’tool_in_base_pose’ to base Htool, and ’obj_in_base_pose’ to base Hcal.

From the set of calibration object poses (’obj_in_cam_pose’) and the poses of the tool in the robot base coordi-
nate system (’tool_in_base_pose’), the operator calibrate_hand_eye determines the two missing transfor-
mations at the ends of the chain, i.e., the pose of the robot tool in the camera coordinate system (camera Htool ,
’tool_in_cam_pose’) and the pose of the calibration object in the robot base coordinate system (base Hcal ,
’obj_in_base_pose’). These two poses are constant.
In contrast, the transformation in the middle of the chain, base Htool , is known but changes for each observation of
the calibration object, because it describes the pose of the tool with respect to the robot base coordinate system. In
the equation the inverted transformation matrix is used. The inversion is performed internally.
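Once the calibration results are available, this chain can be checked numerically for a single observation. The following lines are only a sketch (all variable names are placeholders; ToolInCamPose and ObjInBasePose denote calibration results, ToolInBasePose the robot pose of the corresponding observation):

pose_invert(ToolInBasePose, BaseInToolPose)
pose_compose(ToolInCamPose, BaseInToolPose, BaseInCamPose)
pose_compose(BaseInCamPose, ObjInBasePose, ObjInCamPoseCheck)
* ObjInCamPoseCheck should be close to the observed 'obj_in_cam_pose'.
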
Note that when calibrating SCARA robots, it is not possible to determine the Z translation of ’obj_in_base_pose’.
To eliminate this ambiguity the Z translation ’obj_in_base_pose’ is internally set to 0.0 and the ’tool_in_cam_pose’
is calculated accordingly. It is necessary to determine the true translation in Z after the calibration by moving the
robot to a pose of known height in the camera coordinate system. For this, the following approach can be applied:
The calibration plate is placed at an arbitrary position. The robot is then moved such that the camera can observe
the calibration plate. Now, an image of the calibration plate is acquired and the current robot pose is queried
(ToolInBasePose1). From the image, the pose of the calibration plate in the camera coordinate system can be


determined (ObjInCamPose1). Afterwards, the tool of the robot is manually moved to the origin of the calibration
plate and the robot pose is queried again (ToolInBasePose2). These three poses and the result of the calibration
(ToolInCamPose) can be used to fix the Z ambiguity by using the following lines of code:

pose_invert(ToolInCamPose, CamInToolPose)
pose_compose(CamInToolPose, ObjInCamPose1, ObjInToolPose1)
pose_invert(ToolInBasePose1, BaseInToolPose1)
pose_compose(BaseInToolPose1, ToolInBasePose2, Tool2InTool1Pose)
ZCorrection := ObjInToolPose1[2]-Tool2InTool1Pose[2]
set_origin_pose(ToolInCamPose, 0, 0, ZCorrection, ToolInCamPoseFinal)

The ’optimization_method’ ’stochastic’ also estimates the uncertainty of observations. Besides the input poses
described above, it also uses the extracted calibration marks and is thus only available for use with a camera and a
calibration plate, not for use with a 3D sensor. For articulated robots, the hand-eye poses and camera parameters
are refined simultaneously.
Stationary camera
In this configuration, the robot grasps the calibration object and moves it in front of the camera. Again, the
information extracted from an observation of the calibration object, i.e., the pose of the calibration object in the
camera coordinate system (e.g., the external camera parameters), are equal to a chain of poses or homogeneous
transformation matrices, this time from the calibration object via the robot’s tool to its base and finally to the
camera:

Stationary camera:

    camera Hcal = camera Hbase · base Htool · tool Hcal

i.e., ’obj_in_cam_pose’ corresponds to camera Hcal, ’base_in_cam_pose’ to camera Hbase,
’tool_in_base_pose’ to base Htool, and ’obj_in_tool_pose’ to tool Hcal.

Analogously to the configuration with a moving camera, the operator calibrate_hand_eye determines the
two transformations at the ends of the chain, here the pose of the robot base coordinate system in camera coordi-
nates (camera Hbase , ’base_in_cam_pose’) and the pose of the calibration object relative to the robot tool (tool Hcal ,
’obj_in_tool_pose’).
The transformation in the middle of the chain, base Htool , describes the pose of the tool relative to the robot base
coordinate system. The transformation camera Hcal describes the pose of the calibration object relative to the
camera coordinate system.
Note that when calibrating SCARA robots, it is not possible to determine the Z translation of ’obj_in_tool_pose’.
To eliminate this ambiguity the Z translation of ’obj_in_tool_pose’ is internally set to 0.0 and the
’base_in_cam_pose’ is calculated accordingly. It is necessary to determine the true translation in Z after the
calibration by moving the robot to a pose of known height in the camera coordinate system. For this, the following
approach can be applied: A calibration plate (that is not attached to the robot) is placed at an arbitrary position
such that it can be observed by the camera. The pose of the calibration plate must then be determined in the cam-
era coordinate system (ObjInCamPose). Afterwards the tool of the robot is manually moved to the origin of the
calibration plate and the robot pose is queried (ToolInBasePose). The two poses and the result of the calibration
(BaseInCamPose) can be used to fix the Z ambiguity by using the following lines of code:

pose_invert(BaseInCamPose, CamInBasePose)
pose_compose(CamInBasePose, ObjInCamPose, ObjInBasePose)
ZCorrection := ObjInBasePose[2]-ToolInBasePose[2]
set_origin_pose(BaseInCamPose, 0, 0, ZCorrection, BaseInCamPoseFinal)

The ’optimization_method’ ’stochastic’ also estimates the uncertainty of observations. Besides the input poses
described above, it also uses the extracted calibration marks and is thus only available for use with a camera and a


calibration plate, not for use with a 3D sensor. For articulated robots, the hand-eye poses and camera parameters
are refined simultaneously.
Preparing the calibration input data
Before calling calibrate_hand_eye, you must create and fill the calibration data model with the following
steps:

1. Create a calibration data model with the operator create_calib_data, specifying the num-
ber of cameras in the setup and the number of used calibration objects. Depending on your
scenario, CalibSetup has to be set to ’hand_eye_moving_camera’, ’hand_eye_stationary_camera’,
’hand_eye_scara_moving_camera’, or ’hand_eye_scara_stationary_camera’. These four scenarios on the
one hand distinguish whether the camera or the calibration object is moved by the robot and on the other
hand distinguish whether an articulated robot or a SCARA robot is calibrated. The arm of an articulated
robot has three rotary joints typically covering 6 degrees of freedom (3 translations and 3 rotations). SCARA
robots have two parallel rotary joints and one parallel prismatic joint covering only 4 degrees of freedom
(3 translations and 1 rotation). Loosely speaking, an articulated robot is able to tilt its end effector while a
SCARA robot is not.
2. Specify the optimization method with the operator set_calib_data. For the parameter
DataName=’optimization_method’, three options for DataValue are available, DataValue=’linear’,
DataValue=’nonlinear’ and DataValue=’stochastic’ (see paragraph ’Performing the actual hand-eye
calibration’).
3. Specify the poses of the calibration object
(a) For each observation of the calibration object, the 3D pose can be set directly using the operator
set_calib_data_observ_pose. This operator is intended to be used with generic 3D sensors
that observe the calibration object.
(b) The pose of the calibration object can also be estimated using camera images. The cali-
bration object has to be set in the calibration data model CalibDataID with the operator
set_calib_data_calib_object. Initial camera parameters have to be set with the operator
set_calib_data_cam_param. If a standard HALCON calibration plate is used, the operator
find_calib_object determines the pose of the calibration plate relative to the camera and saves it
in the calibration data model CalibDataID.
In this case, for articulated (i.e., non-SCARA) robots, the operator calibrate_hand_eye
calibrates the camera before performing the hand-eye calibration. If ’optimization_method’
is set to ’stochastic’, the hand-eye poses and camera parameters
are then refined simultaneously. If the provided camera parameters are already cal-
ibrated, the camera calibration can be switched off by calling set_calib_data
(CalibDataID,’camera’,’general’,’excluded_settings’,’params’).
In contrast, for SCARA robots calibrate_hand_eye always assumes that the provided camera
parameters are already calibrated; in this case the internal camera calibration is never performed
automatically during hand-eye calibration. This is because the internal camera parameters cannot
be calibrated reliably without significantly tilting the calibration plate with respect to the camera.
For hand-eye calibration, the calibration plate is often approximately parallel to the image plane,
and for SCARA robots all camera poses are therefore approximately parallel. Consequently, the
camera must be calibrated beforehand using a different set of calibration images.
4. Specify the poses of the tool in robot base coordinates. For each pose of the calibration object in
the camera coordinate system, the corresponding pose of the tool in the robot base coordinate sys-
tem has to be set with the operator set_calib_data(CalibDataID,’tool’, PoseNumber,
’tool_in_base_pose’, ToolInBasePose).
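As an illustration, the preparation steps and the subsequent calibration for a moving camera setup with a single camera and a standard HALCON calibration plate might look as follows. This is only a sketch: the file names, the start parameters StartCamParam, and the number of robot poses NumPoses are placeholders that must be provided by the application (see set_calib_data for the exact argument values).

create_calib_data ('hand_eye_moving_camera', 1, 1, CalibDataID)
set_calib_data (CalibDataID, 'model', 'general', 'optimization_method', \
                'nonlinear')
set_calib_data_cam_param (CalibDataID, 0, [], StartCamParam)
set_calib_data_calib_object (CalibDataID, 0, 'caltab_30mm.descr')
for I := 0 to NumPoses - 1 by 1
    read_image (Image, 'hand_eye_' + I$'02d')
    * Observe the calibration plate and store its pose in the model.
    find_calib_object (Image, CalibDataID, 0, 0, I, [], [])
    * Set the corresponding pose of the tool in robot base coordinates.
    read_pose ('tool_in_base_pose_' + I$'02d' + '.dat', ToolInBasePose)
    set_calib_data (CalibDataID, 'tool', I, 'tool_in_base_pose', \
                    ToolInBasePose)
endfor
calibrate_hand_eye (CalibDataID, Errors)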

Performing the actual hand-eye calibration


The operator calibrate_hand_eye can perform the calibration in three different ways. In all cases, all pro-
vided calibration object poses in camera coordinates and the corresponding poses of the tool in robot base coor-
dinates are used for the calibration. The method ’stochastic’ also uses the extracted calibration marks, and is thus
only available for use with a camera and a calibration plate, not for use with a 3D sensor. The method to be used
is specified with set_calib_data.
For the parameter combination DataName=’optimization_method’ and DataValue=’linear’, the calibration is
performed using a linear algorithm which is fast but in many practical situations not accurate enough.


For the parameter DataName=’optimization_method’ and DataValue=’nonlinear’, the calibration is performed
using a non-linear algorithm, which results in more accurately calibrated poses.
For the parameter DataName=’optimization_method’ and DataValue=’stochastic’, the calibration algorithm
models the uncertainty of all measured observations including the input robot poses, which results in more robustly
calibrated hand-eye poses. The estimation will be better the more input poses are used. However, the method is
only available for use with a camera and a calibration plate, not for use with a 3D sensor. For articulated robots,
the hand-eye poses and camera parameters are refined simultaneously.
Checking the success of the calibration
The operator calibrate_hand_eye returns the pose error of the complete chain of transformations in
Errors. To be more precise, a tuple with four elements is returned, where the first element is the root-mean-
square error of the translational part, the second element is the root-mean-square error of the rotational part, the
third element is the maximum translational error and the fourth element is the maximum rotational error. Using
these error measures, it can be determined whether the calibration was successful.
The Errors are returned in the same units in which the input poses were given, i.e., the translational errors are
typically given in meters and the rotational errors are always given in degrees.
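For example, the individual error measures can be accessed by indexing the returned tuple:

calibrate_hand_eye (CalibDataID, Errors)
* Unpack the pose error of the complete chain of transformations.
TransErrorRMS := Errors[0]
RotErrorRMS := Errors[1]
TransErrorMax := Errors[2]
RotErrorMax := Errors[3]
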
If ’optimization_method’ is set to ’stochastic’, get_calib_data can be used to obtain
’hand_eye_calib_error_corrected_tool’, which differs from Errors only in that it uses the corrected robot
tool poses instead of the input robot tool poses.
For articulated robots, get_calib_data can be used to obtain the ’camera_calib_error’ of the camera cali-
bration, the root mean square error (RMSE) of the direct back projection of calibration mark centers into camera
images. If ’optimization_method’ is set to ’stochastic’, ’camera_calib_error_corrected_tool’ returns the back pro-
jection error via the pose chain using corrected tool poses.
Getting the calibration results
The poses that are computed with the operator calibrate_hand_eye can be queried with
get_calib_data. For the moving camera scenario, the 3D pose of the tool in the camera coordinate
system (’tool_in_cam_pose’) and the 3D pose of the calibration object in the robot base coordinate system
(’obj_in_base_pose’) can be obtained. For the stationary camera scenario, the 3D pose of the robot base in the
camera coordinate system (’base_in_cam_pose’) and the 3D pose of the calibration object in the coordinate
system of the tool (’obj_in_tool_pose’) can be obtained.
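For example, for the moving camera scenario the results might be queried as follows. This is only a sketch; the ItemType arguments 'camera' and 'calib_obj' used here are assumptions, see get_calib_data for the exact argument values:

get_calib_data (CalibDataID, 'camera', 0, 'tool_in_cam_pose', ToolInCamPose)
get_calib_data (CalibDataID, 'calib_obj', 0, 'obj_in_base_pose', ObjInBasePose)
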
Querying the input data
If the poses of the calibration object relative to a camera were computed with find_calib_object, then for
articulated (i.e., non-SCARA) robots they are used in an internal camera calibration step preceding the hand-eye
calibration and are calibrated as well. For ’optimization_method’ set to ’stochastic’, the hand-eye poses and camera
parameters are refined simultaneously, the poses of the calibration object are then updated relative to the resulting
new camera parameters. The calibrated 3D poses can be queried using get_calib_data with the parameter
ItemType=’calib_obj_pose’.
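For example, the calibrated pose of the calibration object for observation index I might be queried as follows (a sketch; the ItemIdx format [CalibObjIdx, CalibObjPoseIdx] is an assumption, see get_calib_data):

get_calib_data (CalibDataID, 'calib_obj_pose', [0, I], 'pose', ObjInCamPose)
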
If the poses of the calibration object were observed with a generic 3D sensor, they cannot be cali-
brated and are set by set_calib_data_observ_pose. These raw 3D poses can be queried using
get_calib_data_observ_pose.
The corresponding 3D poses of the tool in the coordinate system of the robot base can be queried using
get_calib_data.
Acquiring a suitable set of observations
The following conditions, especially if using a standard calibration plate, should be considered:

• The position of the calibration object (moving camera: relative to the robot’s base; stationary camera: relative
to the robot’s tool) and the position of the camera (moving camera: relative to the robot’s tool; stationary
camera: relative to the robot’s base) must not be changed between the calibration poses.
• Even though a lower limit of three calibration object poses is theoretically possible, it is recommended to
acquire 10 or more poses in which the pose of the camera or the robot hand is sufficiently different. If
’optimization_method’ is set to ’stochastic’, at least 25 poses are recommended. The estimation will be better
the more poses are used.
For articulated (i.e., non-SCARA) robots the amount of rotation between the calibration object poses is
essential and should be at least 30 degrees or better 60 degrees. The rotations between the poses must exhibit
at least two different axes of rotation. Very different orientations lead to more precise results of the hand-eye


calibration. For SCARA robots there is only one axis of rotation. The amount of rotation between the images
should also be large.
• For cameras, the internal camera parameters must be constant during and after the calibration. Note that
changes of the image size, the focal length, the aperture, or the focus cause a change of the internal camera
parameters.
• As mentioned, the camera must not be modified between the acquisition of the individual images. Please
make sure that the focus is sufficient for the expected changes of the camera to calibration plate distance.
Therefore, bright lighting conditions for the calibration plate are important, because then you can use smaller
apertures, which result in a larger depth of focus.

Obtaining the poses of the robot tool


We recommend creating the robot poses in a separate program and saving them in files using write_pose. In the
calibration program you can then import them and set them in the calibration data model CalibDataID.
Via the Cartesian interface of the robot, you can typically obtain the pose of the tool in robot base coordinates in
a notation that corresponds to the pose representations with the codes 0 or 2 (OrderOfRotation = ’gba’ or
’abg’, see create_pose). In this case, you can directly use the pose values obtained from the robot as input for
create_pose.
If the Cartesian interface of your robot describes the orientation in a different way, e.g., with the representation
ZYZ (Rz (ϕ1) · Ry (ϕ2) · Rz (ϕ3)), you can create the corresponding homogeneous transformation matrix step by
step using the operators hom_mat3d_rotate and hom_mat3d_translate and then convert the resulting
matrix into a pose using hom_mat3d_to_pose. The following example code creates a pose from the ZYZ
representation described above:

hom_mat3d_identity(HomMat3DIdent)
hom_mat3d_rotate(HomMat3DIdent, phi3, 'z', 0, 0, 0, HomMat3DRotZ)
hom_mat3d_rotate(HomMat3DRotZ, phi2, 'y', 0, 0, 0, HomMat3DRotZY)
hom_mat3d_rotate(HomMat3DRotZY, phi1, 'z', 0, 0, 0, HomMat3DRotZYZ)
hom_mat3d_translate(HomMat3DRotZYZ, Tx, Ty, Tz, base_H_tool)
hom_mat3d_to_pose(base_H_tool, RobPose)

Please note that the hand-eye calibration only works if the poses of the tool in robot base coordinates are specified
with high accuracy. Of the provided methods, ’optimization_method’ set to ’stochastic’ will yield the most robust
results with respect to noise on the poses of the tool in robot base coordinates. The estimation will be better the
more input poses are used.
Please note that this operator supports canceling timeouts and interrupts if ’optimization_method’ is set to ’stochas-
tic’.
Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. Errors (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real
Average residual error of the optimization.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
This operator modifies the state of the following input parameter:

• CalibDataID


During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_calib_data, set_calib_data_cam_param, set_calib_data_calib_object,
set_calib_data_observ_pose, find_calib_object, set_calib_data,
remove_calib_data, remove_calib_data_observ
Possible Successors
get_calib_data
References
K. Daniilidis: “Hand-Eye Calibration Using Dual Quaternions”; International Journal of Robotics Research, Vol.
18, No. 3, pp. 286-298; 1999.
M. Ulrich, C. Steger: “Hand-Eye Calibration of SCARA Robots Using Dual Quaternions”; Pattern Recognition
and Image Analysis, Vol. 26, No. 1, pp. 231-239; January 2016.
M. Ulrich, M. Hillemann: “Generic Hand–Eye Calibration of Uncertain Robots”; 2021 IEEE International Con-
ference on Robotics and Automation (ICRA), pp. 11060-11066; 2021.
Module
Calibration

get_calib_data_observ_pose ( : : CalibDataID, CameraIdx,


CalibObjIdx, CalibObjPoseIdx : ObjInCameraPose )

Get observed calibration object poses from a calibration data model.


The operator get_calib_data_observ_pose reads the poses of the calibration object given in the camera
coordinate system from a calibration data model CalibDataID. The observation data was previously stored by
set_calib_data_observ_pose, find_calib_object, or set_calib_data_observ_points.
Note that if the model CalibDataID uses a general sensor and no calibration object (i.e., the model was created
by create_calib_data with NumCameras=0 and NumCalibObjects=0), then both CameraIdx and
CalibObjIdx must be set to 0.
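For example, the first stored pose of the first calibration object observed by the first camera (all indices 0) can be read back as follows:

get_calib_data_observ_pose (CalibDataID, 0, 0, 0, ObjInCameraPose)
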
Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. CameraIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observing camera.
Default: 0
. CalibObjIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observed calibration object.
Default: 0
. CalibObjPoseIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observed calibration object pose.
Default: 0
. ObjInCameraPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Stored observed calibration object pose relative to the observing camera.
Number of elements: 7
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Module
Calibration


hand_eye_calibration ( : : X, Y, Z, Row, Col, NumPoints, RobotPoses,
    CameraParam, Method, QualityType : CameraPose, CalibrationPose, Quality )

Perform a hand-eye calibration.


The operator hand_eye_calibration determines the 3D pose of a robot (“hand”) relative to a camera
(“eye”). With this information, the results of image processing can be transformed into the coordinate system of
the robot which can then, e.g., grasp an inspected part. Please note that the operator hand_eye_calibration
does not support 3D sensors. A hand-eye calibration including 3D sensors is only supported by the operator
calibrate_hand_eye. That operator furthermore provides a more user-friendly interface to the hand-eye
calibration than the operator hand_eye_calibration, since the reference coordinate systems are explicitly
indicated.
There are two possible configurations of robot-camera (hand-eye) systems: The camera can be mounted on the
robot or be stationary and observe the robot. Note that the term robot is used in place of a mechanism that moves
objects. Thus, you can use hand_eye_calibration to calibrate many different systems, from pan-tilt heads
to multi-axis manipulators.
In essence, systems suitable for hand-eye calibration are described by a closed chain of four Euclidean transforma-
tions. In this chain two non-consecutive transformations are either known from the robot controller or computed
from calibration points seen by a camera system. The two other constant transformations are computed by the
hand-eye calibration procedure.
A hand-eye calibration is performed similarly to the calibration of the external camera parameters (see
camera_calibration): You acquire a set of images of a calibration object, determine correspondences be-
tween known calibration points and their projection in the images and pass them to hand_eye_calibration
via the parameters X, Y, Z, Row, Col, and NumPoints. If you use the standard calibration plate, the corre-
spondences can be determined very easily with the operators find_caltab and find_marks_and_pose.
Furthermore, the camera is identical for the complete calibration sequence and is specified by the internal camera
parameters in CameraParam. The internal camera parameters are calibrated beforehand deploying the operator
calibrate_cameras or camera_calibration.
In contrast to the camera calibration, the calibration object is not moved manually. This task is delegated to
the robot, which either moves the camera (mounted camera) or the calibration object (stationary camera). The
robot’s movements are assumed to be known and therefore are also used as an input for the calibration (parameter
RobotPoses).
The output of hand_eye_calibration are the two poses CameraPose and CalibrationPose. Their
pose type is identical to the pose type of the first input robot pose.
Basically, two hand-eye configurations can be distinguished and are discussed in more detail below, followed by
general information about the process of hand-eye calibration.
Moving camera (mounted on a robot)
In this configuration, the calibration object remains stationary and the camera is moved to different positions by the
robot. The main idea behind the hand-eye calibration is that the information extracted from a calibration image,
i.e., the pose of the calibration object relative to the camera (i.e., the external camera parameters), can be seen as a
chain of poses or homogeneous transformation matrices, from the calibration object via the base of the robot to its
tool (end-effector) and finally to the camera:

Moving camera:

    cam Hcal = cam Htool · tool Hbase · base Hcal

i.e., CameraPose corresponds to cam Htool, RobotPoses to tool Hbase, and
CalibrationPose to base Hcal.

From the set of calibration images, the operator hand_eye_calibration determines the two transformations
at the ends of the chain, i.e., the pose of the robot tool in camera coordinates (cam Htool ,CameraPose) and the
pose of the calibration object in the robot base coordinate system (base Hcal ,CalibrationPose).
In contrast, the transformation in the middle of the chain, tool Hbase , is known but changes for each calibration
image, because it describes the pose of the robot moving the camera, or to be more exact its inverse pose (pose of


the base coordinate system in robot tool coordinates). You must specify the inverse robot poses in the calibration
images in the parameter RobotPoses.
Note that when calibrating SCARA robots it is not possible to determine the Z translation of CalibrationPose.
To eliminate this ambiguity the Z translation of CalibrationPose is internally set to 0.0 and the CameraPose
is calculated accordingly. It is necessary to determine the true translation in Z after the calibration (see
calibrate_hand_eye).
Stationary camera
In this configuration, the robot grasps the calibration object and moves it in front of the camera. Again, the
information extracted from a calibration image, i.e., the pose of the calibration object in camera coordinates (the
external camera parameters), are equal to a chain of poses or homogeneous transformation matrices, this time from
the calibration object via the robot’s tool to its base and finally to the camera:

Stationary camera:

    cam Hcal = cam Hbase · base Htool · tool Hcal

i.e., CameraPose corresponds to cam Hbase, RobotPoses to base Htool, and
CalibrationPose to tool Hcal.

Analogously to the configuration with a moving camera, the operator hand_eye_calibration determines
the two transformations at the ends of the chain, here the pose of the robot base coordinate system in cam-
era coordinates (cam Hbase ,CameraPose) and the pose of the calibration object relative to the robot tool
(tool Hcal ,CalibrationPose).
The transformation in the middle of the chain, base Htool , describes the pose of the robot moving the calibration
object, i.e., the pose of the tool relative to the base coordinate system. You must specify the robot poses in the
calibration images in the parameter RobotPoses.
Note that when calibrating SCARA robots it is not possible to determine the Z translation of CalibrationPose.
To eliminate this ambiguity the Z translation of CalibrationPose is internally set to 0.0 and the CameraPose
is calculated accordingly. It is necessary to determine the true translation in Z after the calibration (see
calibrate_hand_eye).
Additional information about the calibration process
The following sections discuss individual questions arising from the use of hand_eye_calibration. They
are intended to be a guideline for using the operator in an application, as well as to help understanding the operator.

How do I get 3D calibration points and their projections? 3D calibration points given in the world coordinate
system (X, Y, Z) and their associated projections in the image (Row, Col) form the basis of the hand-eye
calibration. In order to be able to perform a successful hand-eye calibration, you need at least three images of
the 3D calibration points that were obtained under different poses of the manipulator. In each image at least
four points must be available, in order to compute internally the pose transferring the calibration points from
their world coordinate system into the camera coordinate system.
In principle, you can use arbitrary known points for the calibration. However, it is usually most convenient
to use the standard calibration plate, e.g., the one that can be generated with gen_caltab. In this case,
you can use the operators find_caltab and find_marks_and_pose to extract the position of the
calibration plate and of the calibration marks and the operator caltab_points to read the 3D coordinates
of the calibration marks (see also the description of camera_calibration).
The parameter NumPoints specifies the number of 3D calibration points used for each pose of the manip-
ulator, i.e., for each image. With this, the 3D calibration points which are stored in a linearized fashion in
X, Y, Z, and their corresponding projections (Row, Col) can be associated with the corresponding pose of
the manipulator (RobotPoses). Note that in contrast to the operator camera_calibration the 3D
coordinates of the calibration points must be specified for each calibration image, not only once, and thus can
vary for each image of the sequence.
How do I acquire a suitable set of images? The following conditions, especially if using a standard calibration
plate, should be considered:
• The position of the calibration marks (moving camera: relative to the robot’s base; stationary camera:
relative to the robot’s tool) and the position of the camera (moving camera: relative to the robot’s tool;
stationary camera: relative to the robot’s base) must not be changed between the images.


• The internal camera parameters (CameraParam) must be constant and must be determined in a previ-
ous camera calibration step (see camera_calibration). Note that changes of the image size, the
focal length, the aperture, or the focus cause a change of the internal camera parameters.
• The theoretical lower limit of the number of images to acquire is three. Nevertheless, it is recommended
to have 10 or more images at hand, in which the position of the camera or the robot hand is sufficiently
different.
For articulated (i.e., non-SCARA) robots the amount of rotation between the images is essential and
should be at least 30 degrees or better 60 degrees. The rotations between the images must exhibit at least
two different axes of rotation. Very different orientations lead to precise calibration results. For SCARA
robots there is only one axis of rotation. The amount of rotation between the images should also be large.
• In each image, the calibration plate must be completely visible (including its border).
• Reflections or other disturbances should not impair the detection of the calibration plate and its calibra-
tion marks.
• If individual calibration marks instead of the standard calibration plate are used, at least four marks must
be present in each image.
• In each image, the calibration plate should at least fill one quarter of the entire image for a precise
computation of the calibration to camera transformation, which is performed internally during hand-eye
calibration.
• As mentioned, the camera must not be modified between the acquisition of the individual images. Please
make sure that the focus is sufficient for the expected changes of the camera to calibration plate distance.
Therefore, bright lighting conditions for the calibration plate are important, because then you can use
smaller apertures, which result in a larger depth of focus.
How do I obtain the poses of the robot? In the parameter RobotPoses you must pass the poses of the robot in
the calibration images (moving camera: pose of the robot base in robot tool coordinates; stationary camera:
pose of the robot tool in robot base coordinates) in a linearized fashion. We recommend creating the robot
poses in a separate program and saving them in files using write_pose. In the calibration program you can
then read and accumulate them in a tuple as shown in the example program below. In addition, we recommend
saving the pose of the robot tool in robot base coordinates independent of the hand-eye configuration. When
using a moving camera, you then invert the read poses before accumulating them. This is also shown in the
example program.
Via the Cartesian interface of the robot, you can typically obtain the pose of the tool in base coordinates in
a notation that corresponds to the pose representations with the codes 0 or 2 (OrderOfRotation = ’gba’
or ’abg’, see create_pose). In this case, you can directly use the pose values obtained from the robot as
input for create_pose.
If the Cartesian interface of your robot describes the orientation in a different way, e.g., with the representation
ZYZ (Rz (ϕ1) · Ry (ϕ2) · Rz (ϕ3)), you can create the corresponding homogeneous transformation matrix
step by step using the operators hom_mat3d_rotate and hom_mat3d_translate and then convert
the matrix into a pose using hom_mat3d_to_pose. The following example code creates a pose from the
ZYZ representation described above:

hom_mat3d_identity(HomMat3DIdent)
hom_mat3d_rotate(HomMat3DIdent, phi3, 'z', 0, 0, 0, HomMat3DRotZ)
hom_mat3d_rotate(HomMat3DRotZ, phi2, 'y', 0, 0, 0, HomMat3DRotZY)
hom_mat3d_rotate(HomMat3DRotZY, phi1, 'z', 0, 0, 0, HomMat3DRotZYZ)
hom_mat3d_translate(HomMat3DRotZYZ, Tx, Ty, Tz, base_H_tool)
hom_mat3d_to_pose(base_H_tool, RobPose)
Please note that the hand-eye calibration only works if the robot poses RobotPoses are specified with high
accuracy!
What is the order of the individual parameters? The length of the tuple NumPoints corresponds to the num-
ber of different positions of the manipulator and thus to the number of calibration images. The parameter
NumPoints determines the number of calibration points used in the individual positions. If the standard
calibration plate is used, this means 49 points per position (image). If, for example, 15 images were acquired,
NumPoints is a tuple of length 15, where all elements of the tuple have the value 49.


The number of images in the sequence, which is determined by the length of NumPoints, must also be taken
into account for the tuples of the 3D calibration points and the extracted 2D marks, respectively. Hence,
for 15 calibration images with 49 calibration points each, the tuples X, Y, Z, Row, and Col must contain
15 · 49 = 735 values each. These tuples are ordered according to the image the respective points lie in, i.e.,
the first 49 values correspond to the 49 calibration points in the first image. The order of the 3D calibration
points and the extracted 2D calibration points must be the same in each image.
The length of the tuple RobotPoses also depends on the number of calibration images. If, for example, 15
images and therefore 15 poses are used, the length of the tuple RobotPoses is 15 · 7 = 105 (15 times 7
pose parameters). The first seven parameters thus determine the pose of the manipulator in the first image,
and so on.
Algorithm and output parameters The parameter Method determines the type of algorithm used for the hand-
eye calibration: With ’linear’ a linear algorithm is chosen, which is fast but in many practical situations not
accurate enough. ’nonlinear’ selects a non-linear algorithm, which results in the most accurately calibrated
poses and which is the method of choice.
For the calibration of SCARA robots the parameter Method must be set to ’scara_linear’ or
’scara_nonlinear’, respectively. While the arm of an articulated robot has three rotary joints typically cov-
ering 6 degrees of freedom (3 translations and 3 rotations), SCARA robots have two parallel rotary joints
and one parallel prismatic joint covering only 4 degrees of freedom (3 translations and 1 rotation). Loosely
speaking, an articulated robot is able to tilt its end effector while a SCARA robot is not.
The parameter QualityType switches between different possibilities for assessing the quality of the cali-
bration result returned in Quality. ’error_pose’ stands for the pose error of the complete chain of transfor-
mations. To be more precise, a tuple with four elements is returned, where the first element is the root-mean-
square error of the translational part, the second element is the root-mean-square error of the rotational part,
the third element is the maximum translational error and the fourth element is the maximum rotational error.
With ’standard_deviation’, a tuple with 12 elements containing the standard deviations of the two poses is
returned: The first six elements refer to the camera pose and the others to the pose of the calibration points.
With ’covariance’, the full 12x12 covariance matrix of both poses is returned. Like poses, the standard devi-
ations and the covariances are specified in the units [m] and [°]. Note that selecting ’linear’ or ’scara_linear’
for the parameter Method enables only the output of the pose error (’error_pose’).

Parameters
. X (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Linear list containing all the x coordinates of the calibration points (in the order of the images).
. Y (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Linear list containing all the y coordinates of the calibration points (in the order of the images).
. Z (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Linear list containing all the z coordinates of the calibration points (in the order of the images).
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Linear list containing all row coordinates of the calibration points (in the order of the images).
. Col (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Linear list containing all the column coordinates of the calibration points (in the order of the images).
. NumPoints (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Number of the calibration points for each image.
. RobotPoses (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; real / integer
Known 3D pose of the robot for each image (moving camera: robot base in robot tool coordinates; stationary
camera: robot tool in robot base coordinates).
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Method of hand-eye calibration.
Default: ’nonlinear’
List of values: Method ∈ {’linear’, ’nonlinear’, ’scara_linear’, ’scara_nonlinear’}
. QualityType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Type of quality assessment.
Default: ’error_pose’
List of values: QualityType ∈ {’error_pose’, ’standard_deviation’, ’covariance’}


. CameraPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer


Computed relative camera pose: 3D pose of the robot tool (moving camera) or robot base (stationary camera),
respectively, in camera coordinates.
. CalibrationPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Computed 3D pose of the calibration points in robot base coordinates (moving camera) or in robot tool
coordinates (stationary camera), respectively.
. Quality (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Quality assessment of the result.
Example

* Note that, in order to use this code snippet, you must provide
* the camera parameters, the calibration plate description file,
* the calibration images, the robot poses, and the values of
* NumImages and IsMovingCameraConfig.
read_cam_par('campar.dat', CameraParam)
CalDescr := 'caltab.descr'
caltab_points(CalDescr, X, Y, Z)
* Initialize the accumulation tuples.
RCoord := []
CCoord := []
XCoord := []
YCoord := []
ZCoord := []
NumMarker := []
RobotPoses := []
* Process all calibration images.
for i := 0 to NumImages-1 by 1
    read_image(Image, 'calib_'+i$'02d')
    * Find marks on the calibration plate in every image.
    find_caltab(Image, CalPlate, CalDescr, 3, 150, 5)
    find_marks_and_pose(Image, CalPlate, CalDescr, CameraParam, 128, 10, 18, \
                        0.9, 15, 100, RCoordTmp, CCoordTmp, StartPose)
    * Accumulate 2D and 3D coordinates of the marks.
    RCoord := [RCoord, RCoordTmp]
    CCoord := [CCoord, CCoordTmp]
    XCoord := [XCoord, X]
    YCoord := [YCoord, Y]
    ZCoord := [ZCoord, Z]
    NumMarker := [NumMarker, |RCoordTmp|]
    * Read pose of the robot tool in robot base coordinates.
    read_pose('robpose_'+i$'02d'+'.dat', RobPose)
    * Moving camera? Invert the pose.
    if (IsMovingCameraConfig == 'true')
        pose_to_hom_mat3d(RobPose, base_H_tool)
        hom_mat3d_invert(base_H_tool, tool_H_base)
        hom_mat3d_to_pose(tool_H_base, RobPose)
    endif
    * Accumulate robot poses.
    RobotPoses := [RobotPoses, RobPose]
endfor
*
* Perform hand-eye calibration.
*
hand_eye_calibration(XCoord, YCoord, ZCoord, RCoord, CCoord, NumMarker, \
                     RobotPoses, CameraParam, 'nonlinear', 'error_pose', \
                     CameraPose, CalibrationPose, Error)

Result
The operator hand_eye_calibration returns the value 2 (H_MSG_TRUE) if the given parameters are correct.
Otherwise, an exception will be raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.


Possible Predecessors
find_marks_and_pose, camera_calibration, calibrate_cameras
Possible Successors
write_pose, convert_pose_type, pose_to_hom_mat3d, disp_caltab, sim_caltab
Alternatives
calibrate_hand_eye
See also
find_caltab, find_marks_and_pose, disp_caltab, sim_caltab, write_cam_par,
read_cam_par, create_pose, convert_pose_type, write_pose, read_pose,
pose_to_hom_mat3d, hom_mat3d_to_pose, caltab_points, gen_caltab,
calibrate_hand_eye
References
K. Daniilidis: “Hand-Eye Calibration Using Dual Quaternions”; International Journal of Robotics Research, Vol.
18, No. 3, pp. 286-298; 1999.
M. Ulrich, C. Steger: “Hand-Eye Calibration of SCARA Robots Using Dual Quaternions”; Pattern Recognition
and Image Analysis, Vol. 26, No. 1, pp. 231-239; January 2016.
Module
Calibration

set_calib_data_observ_pose ( : : CalibDataID, CameraIdx, CalibObjIdx,
    CalibObjPoseIdx, ObjInCameraPose : )

Set observed calibration object poses in a calibration data model.


For a calibration data model of type CalibSetup=’hand_eye_moving_cam’, ’hand_eye_stationary_cam’,
’hand_eye_scara_moving_cam’, or ’hand_eye_scara_stationary_cam’ with no calibration object (see
create_calib_data), the hand-eye calibration is based on so-called observations of an arbitrary object
in the camera coordinate system. In the following this object will be called calibration object. Addition-
ally, the corresponding poses of the robot tool in the robot base coordinate system must be known. With
set_calib_data_observ_pose, you store an observation of the calibration object pose in the calibration
data model CalibDataID. An observation of the calibration object pose consists of the following data:

CameraIdx: Index of the observing camera
CalibObjIdx: Index of the observed calibration object
CalibObjPoseIdx: Index of the observed pose of the calibration object. You can choose it freely, without
following a strict order. If you specify an index that already exists for the calibration object CalibObjIdx,
the corresponding observation data is replaced by the new one.
ObjInCameraPose: Pose of the observed calibration object relative to observing camera.

Note that, since the model CalibDataID uses a general sensor and no calibration object (i.e., the model was
created by create_calib_data with NumCameras=0 and NumCalibObjects=0), both CameraIdx and
CalibObjIdx must be set to 0. If the model uses a camera and a calibration object (i.e., NumCameras=1
and NumCalibObjects=1), then find_calib_object or set_calib_data_observ_points must
be used.
The observation pose data can be accessed later by calling get_calib_data_observ_pose using the same
values for the arguments CameraIdx, CalibObjIdx, and CalibObjPoseIdx.
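A minimal sketch of filling such a model from a generic 3D sensor might look as follows. The number of poses NumPoses and the poses ObjInCameraPose and ToolInBasePose are placeholders that must be provided by the sensor and the robot controller, respectively:

* Hedged sketch: hand-eye calibration with a generic 3D sensor, i.e.,
* no camera and no HALCON calibration object (NumCameras = 0,
* NumCalibObjects = 0). ObjInCameraPose and ToolInBasePose for each
* observation are assumed to be available here.
create_calib_data ('hand_eye_stationary_cam', 0, 0, CalibDataID)
for I := 0 to NumPoses - 1 by 1
    * Pose of the observed object in the sensor coordinate system.
    set_calib_data_observ_pose (CalibDataID, 0, 0, I, ObjInCameraPose)
    * Corresponding pose of the tool in robot base coordinates.
    set_calib_data (CalibDataID, 'tool', I, 'tool_in_base_pose', \
                    ToolInBasePose)
endfor
calibrate_hand_eye (CalibDataID, Errors)
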
Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. CameraIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observing camera.
Default: 0
Suggested values: CameraIdx ∈ {0, 1, 2}


. CalibObjIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer


Index of the calibration object.
Default: 0
Suggested values: CalibObjIdx ∈ {0, 1, 2}
. CalibObjPoseIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observed calibration object pose.
Default: 0
Suggested values: CalibObjPoseIdx ∈ {0, 1, 2}
Restriction: CalibObjPoseIdx >= 0
. ObjInCameraPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Pose of the observed calibration object relative to the observing camera.
Number of elements: 7
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• CalibDataID

During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
find_marks_and_pose, set_calib_data_cam_param, set_calib_data_calib_object
Possible Successors
set_calib_data, calibrate_cameras
Alternatives
find_calib_object
Module
Calibration

6.5 Inverse Projection

get_line_of_sight ( : : Row, Column, CameraParam : PX, PY, PZ, QX, QY, QZ )

Compute the line of sight corresponding to a point in the image.


get_line_of_sight computes the line of sight corresponding to a pixel (Row, Column) in the image. The
line of sight is a (straight) line in the camera coordinate system, which is described by two points (PX,PY,PZ)
and (QX,QY,QZ) on the line. The camera is described by the internal camera parameters CameraParam (see
Calibration for details). If a pinhole camera is used, the second point lies on the focal plane, i.e., for frame
cameras, the output parameter QZ is equivalent to the focal length of the camera, whereas for linescan cameras, QZ
also depends on the motion of the camera with respect to the object. The equation of the line of sight is given by
     
    ( X )   ( PX )       ( QX − PX )
    ( Y ) = ( PY ) + λ · ( QY − PY )
    ( Z )   ( PZ )       ( QZ − PZ )

The advantage of representing the line of sight as two points is that it is easier to transform the line in 3D. To do
so, all that is necessary is to apply the operator affine_trans_point_3d to the two points.
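For example, to transform the line of sight from the camera coordinate system into the world coordinate system, the pose of the camera in world coordinates (here assumed to be available as CamInWorldPose, e.g., from a calibration) can be applied to both points:

pose_to_hom_mat3d (CamInWorldPose, HomMat3D)
affine_trans_point_3d (HomMat3D, PX, PY, PZ, PXWorld, PYWorld, PZWorld)
affine_trans_point_3d (HomMat3D, QX, QY, QZ, QXWorld, QYWorld, QZWorld)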


Parameters
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Row coordinate of the pixel.
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Column coordinate of the pixel.
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. PX (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
X coordinate of the first point on the line of sight in the camera coordinate system
. PY (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Y coordinate of the first point on the line of sight in the camera coordinate system
. PZ (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Z coordinate of the first point on the line of sight in the camera coordinate system
. QX (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
X coordinate of the second point on the line of sight in the camera coordinate system
. QY (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Y coordinate of the second point on the line of sight in the camera coordinate system
. QZ (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Z coordinate of the second point on the line of sight in the camera coordinate system
Example

* Set the internal camera parameters.
* Note that, typically, these values are the result of a prior
* calibration.
gen_cam_par_area_scan_division (0.01, 30, 4.65e-006, 4.65e-006, \
640, 480, 1280, 960, CameraParam)
* Inverse projection.
get_line_of_sight([50, 100], [100, 200], CameraParam, PX, PY, PZ, QX, QY, QZ)

Result
get_line_of_sight returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception
is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
Possible Predecessors
read_cam_par, camera_calibration
Possible Successors
affine_trans_point_3d
See also
camera_calibration, disp_caltab, read_cam_par, project_3d_point,
affine_trans_point_3d
Module
Calibration

6.6 Monocular

camera_calibration ( : : NX, NY, NZ, NRow, NCol, StartCamParam, NStartPose,
    EstimateParams : CameraParam, NFinalPose, Errors )

Determine all camera parameters by a simultaneous minimization process.


camera_calibration performs the calibration of a single camera. For this, known 3D model points (with
coordinates NX, NY, NZ) are projected into the image and the sum of the squared distances between the projected
3D-coordinates and their corresponding image point coordinates (NRow, NCol) is minimized.
As initial values for the minimization process the external (NStartPose) and internal (StartCamParam) cam-
era parameters are used. Thereby NStartPose is an ordered tuple with all initial values for the external camera
parameters given in the form ccs Pwcs , where ccs denotes the camera coordinate system and wcs the world co-
ordinate system (see Transformations / Poses and “Solution Guide III-C - 3D Vision”). Individual
camera parameters can be explicitly included or excluded from the minimization with EstimateParams. For a
detailed description of the available camera models, the different sets of internal camera parameters, and general
requirements for the setup, see Calibration.
For a successful calibration, at least one calibration object with accurately known metric properties is needed, e.g.,
a HALCON calibration plate. Before calling camera_calibration, take a series of images of the calibration
object in different orientations and make sure that the whole field of view or measurement volume is covered. The
success of the calibration highly depends on the quality of the calibration object and the images. So you might
want to exercise special diligence during the acquisition of the calibration images. See the section “How to take a
set of suitable images?” in Calibration for further details.
After a successful calibration, camera_calibration returns the optimized internal (CameraParam) and
external (NFinalPose ccs Pwcs ) camera parameters of the camera. Additionally, the root mean square error
(RMSE) of the back projection of the optimization is returned in Errors (in pixels). This error gives a general
indication whether the optimization was successful.
Preparation of the calibration process

How to extract the calibration marks in the images? If a HALCON calibration plate is used, you can use the
operator find_calib_object to determine the coordinates of the calibration marks in each image and
to compute a rough estimate for the external camera parameters. Using HALCON calibration plates with
rectangularly arranged marks (see gen_caltab), a combination of the two operators find_caltab and
find_marks_and_pose will have the same effect. In both cases, the hereby obtained values can directly
be used as initial values for the external camera parameters (NStartPose).
Obviously, images in which the segmentation of the calibration plate (find_caltab) has failed
or the calibration marks have not been determined successfully by find_marks_and_pose or
find_calib_object should not be used.
How do you get the required initial values for the calibration? If you use a HALCON calibration plate, the in-
put parameters NX, NY, and NZ are stored in the description file of the calibration plate. You can easily
access them by calling the operator caltab_points. Initial values for the internal camera parameters
(StartCamParam) can be obtained from the specifications of the used camera. Further information can
be found in Calibration. Initial values for the poses of the calibration plate and the coordinates of the cal-
ibration marks NRow and NCol can be calculated using the operator find_calib_object. The tuple
NStartPose is set by the concatenation of all these poses.
Which camera parameters are estimated? The input parameter EstimateParams is used to select which
camera parameters to estimate. Usually, this parameter is set to ’all’, i.e., all 6 external camera param-
eters (translation and rotation) and all internal camera parameters are determined. If the internal camera
parameters already have been determined (e.g., by a previous call to camera_calibration), it is often
desired to only determine the pose of the world coordinate system in camera coordinates (i.e., the external
camera parameters). In this case, EstimateParams can be set to ’pose’. This has the same effect as
EstimateParams = [’alpha’,’beta’,’gamma’,’transx’,’transy’,’transz’]. Otherwise, EstimateParams
contains a tuple of strings that indicates the combination of parameters to estimate. In addition, parameters
can be excluded from estimation by using the prefix ~. For example, the values [’pose’,’~transx’] have the
same effect as [’alpha’,’beta’,’gamma’,’transy’,’transz’]. As a different example, [’all’,’~focus’] determines
all internal and external parameters except the focus. The prefix ~ can be used with all parameter values
except ’all’.
Which limitations exist for the determination of the camera parameters? For additional information about
general limitations when determining camera parameters, please see the section “Further Limitations Re-
lated to Specific Camera Types” in the chapter Calibration.
What is the order within the individual parameters? The length of the tuple NStartPose depends on the
number of calibration images, e.g., using 15 images leads to a length of the tuple NStartPose equal to
15 · 7 = 105 (15 times the 7 external camera parameters). The first 7 values correspond to the pose of the
calibration plate in the first image, the next 7 values to the pose in the second image, etc.


This fixed number of calibration images must be considered within the tuples with the coordinates of the 3D
model marks and the extracted 2D marks. If 15 images are used, the length of the tuples NRow and NCol
is 15 times the length of the tuples with the coordinates of the 3D model marks (NX, NY, and NZ). If every
image contains 49 marks, the length of the tuples NRow and NCol is 15 · 49 = 735, while the length of the
tuples NX, NY, and NZ is 49. The order of the values in NRow and NCol is “image after image”, i.e., using
49 marks the first 3D model point corresponds to the 1st, 50th, 99th, 148th, 197th, 246th, etc. extracted 2D
mark.
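The following small sketch illustrates this ordering (assuming, as above, 15 images with 49 marks each; I and K are hypothetical index variables for the image and the 3D model point):

* Hypothetical indices: I selects the image (0..14), K the 3D model point (0..48).
NumMarks := 49
Index := I * NumMarks + K
* Extracted 2D mark corresponding to 3D model point K in image I.
RowOfMark := NRow[Index]
ColOfMark := NCol[Index]
* Initial pose of the calibration plate in image I (7 values per pose).
PoseOfImage := NStartPose[I * 7:I * 7 + 6]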
What is the meaning of the output parameters? If the camera calibration process has finished successfully, the
output parameters CameraParam and NFinalPose contain the adjusted values for the internal and ex-
ternal camera parameters. The length of the tuple NFinalPose corresponds to the length of the tuple
NStartPose.
The representation types of NFinalPose correspond to the representation type of the first tuple of
NStartPose (see create_pose). You can convert the representation type by convert_pose_type.
As an additional parameter, the root mean square error (RMSE) (Errors) of the back projection of the
optimization is returned. This parameter reflects the accuracy of the calibration. The error value (root mean
square error of the position) is measured in pixels. If only a single camera is calibrated, an Errors value in the order
of 0.1 pixel (the typical detection error by extraction of the coordinates of the projected calibration markers) is
an indication that the optimization fits the observation data well. If Errors strongly differs from 0.1 pixels,
the calibration did not perform well. Reasons for this might be, e.g., a poor image quality, an insufficient
number of calibration images, or an inaccurate calibration plate.
Do I have to use a planar calibration object? No. The operator camera_calibration is designed in a way
that the input tuples NX, NY, NZ, NRow, and NCol can contain any 3D/2D correspondences. The order of the
single parameters is explained in the paragraph “What is the order within the individual parameters?”.
Thus, it makes no difference how the required 3D model marks and the corresponding 2D marks are de-
termined. On the one hand, it is possible to use a 3D calibration object, on the other hand, you also
can use any characteristic points (e.g., natural landmarks) with known position in the world. By setting
EstimateParams to ’pose’, it is thus possible to compute the pose of an object in camera coordinates!
For this, at least three 3D/2D-correspondences are necessary as input. NStartPose can, e.g., be generated
directly as shown in the program example for create_pose.
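A minimal sketch of such a pose estimation (assuming already calibrated internal camera parameters CamParam, known world points X, Y, Z, and their extracted image coordinates Rows, Cols; all variable names are placeholders):

* Rough initial pose of the object in camera coordinates.
create_pose (0, 0, 0.5, 0, 0, 0, 'Rp+T', 'gba', 'point', StartPose)
* Only the external parameters (the object pose) are optimized.
camera_calibration (X, Y, Z, Rows, Cols, CamParam, StartPose, 'pose', \
                    CamParamOut, ObjPose, Errors)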

Attention
The minimization process of the calibration depends on the initial values of the internal (StartCamParam) and
external (NStartPose) camera parameters. The computed average errors Errors give an impression of the
accuracy of the calibration. The errors (deviations in x- and y-coordinates) are measured in pixels.
For line scan cameras, it is possible to set the start value for the internal camera parameter Sy to the value
0.0. In this case, it is not possible to determine the position of the principal point in y-direction. Therefore,
EstimateParams must contain the term ’~cy’. The effective distance of the principal point from the sensor line
is then always pv = Sy · Cy = 0.0. Further information can be found in the section “Further Limitations Related to
Specific Camera Types” of Calibration.
Parameters
. NX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number-array ; real / integer
Ordered tuple with all x coordinates of the calibration marks (in meters).
. NY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number-array ; real / integer
Ordered tuple with all y coordinates of the calibration marks (in meters).
Number of elements: NY == NX
. NZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number-array ; real / integer
Ordered tuple with all z coordinates of the calibration marks (in meters).
Number of elements: NZ == NX
. NRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Ordered tuple with all row coordinates of the extracted calibration marks (in pixels).
. NCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Ordered tuple with all column coordinates of the extracted calibration marks (in pixels).
Number of elements: NCol == NRow
. StartCamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Initial values for the internal camera parameters.


. NStartPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer


Ordered tuple with all initial values for the external camera parameters.
Number of elements: NStartPose == 7 * NRow / NX
. EstimateParams (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
Camera parameters to be estimated.
Default: ’all’
List of values: EstimateParams ∈ {’all’, ’pose’, ’camera’, ’alpha’, ’beta’, ’gamma’, ’transx’, ’transy’,
’transz’, ’focus’, ’magnification’, ’kappa’, ’poly’, ’k1’, ’k2’, ’k3’, ’poly_tan_2’, ’image_plane_dist’, ’tilt’,
’cx’, ’cy’, ’sx’, ’sy’, ’vx’, ’vy’, ’vz’}
. CameraParam (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. NFinalPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
Ordered tuple with all external camera parameters.
Number of elements: NFinalPose == 7 * NRow / NX
. Errors (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Average error distance in pixels.
Example

* Read calibration images.


read_image(Image1, 'calib/grid_space.cal.k.000')
read_image(Image2, 'calib/grid_space.cal.k.001')
read_image(Image3, 'calib/grid_space.cal.k.002')
* Find calibration pattern.
find_caltab(Image1, CalPlate1, 'caltab_big.descr', 3, 112, 5)
find_caltab(Image2, CalPlate2, 'caltab_big.descr', 3, 112, 5)
find_caltab(Image3, CalPlate3, 'caltab_big.descr', 3, 112, 5)
* Find calibration marks and start poses.
StartCamPar := ['area_scan_division', 0.008, 0.0, 0.000011, 0.000011, \
384, 288, 768, 576]
find_marks_and_pose(Image1, CalPlate1, 'caltab_big.descr', StartCamPar, \
128, 10, 18, 0.9, 15.0, 100.0, RCoord1, CCoord1, \
StartPose1)
find_marks_and_pose(Image2, CalPlate2, 'caltab_big.descr', StartCamPar, \
128, 10, 18, 0.9, 15.0, 100.0, RCoord2, CCoord2, \
StartPose2)
find_marks_and_pose(Image3, CalPlate3, 'caltab_big.descr', StartCamPar, \
128, 10, 18, 0.9, 15.0, 100.0, RCoord3, CCoord3, \
StartPose3)
* Read 3D positions of calibration marks.
caltab_points('caltab_big.descr', NX, NY, NZ)
* Camera calibration.
camera_calibration(NX, NY, NZ, [RCoord1, RCoord2, RCoord3], \
[CCoord1, CCoord2, CCoord3], StartCamPar, \
[StartPose1, StartPose2, StartPose3], 'all', \
CameraParam, NFinalPose, Errors)
* Write internal camera parameters to file.
write_cam_par(CameraParam, 'campar.dat')

Result
camera_calibration returns 2 (H_MSG_TRUE) if all parameter values are correct and the desired camera
parameters have been determined by the minimization algorithm. If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.


Possible Predecessors
find_marks_and_pose, caltab_points, read_cam_par
Possible Successors
write_pose, pose_to_hom_mat3d, disp_caltab, sim_caltab
Alternatives
calibrate_cameras
See also
find_caltab, find_marks_and_pose, disp_caltab, sim_caltab, write_cam_par,
read_cam_par, create_pose, convert_pose_type, write_pose, read_pose,
pose_to_hom_mat3d, hom_mat3d_to_pose, caltab_points, gen_caltab,
calibrate_cameras
Module
Calibration

6.7 Multi-View

This chapter describes how to calibrate different multi-view camera setups.


In order to achieve high accuracy for your measuring tasks you need to calibrate your camera setup. In comparison
to a single-camera setup, some additional requirements apply to the calibration of a multi-view camera setup. The
following paragraphs provide explanations regarding the calibration of multi-view camera setups. For general
information on camera calibration please refer to the chapter Calibration.
Preparing the Calibration Input Data for Multi-View Camera Setups
Before the actual calibration can be performed, a calibration data model must be prepared (as described in Calibration). For setups with multiple cameras, these additional aspects should be considered (a minimal code sketch follows the list):

• The number of cameras in the setup and the number of used calibration objects can be set when calling
create_calib_data.
• When specifying the camera type with set_calib_data_cam_param, note that only cameras of the
same type (i.e., area scan or line scan) can be calibrated in a single setup.
• Configure the calibration process, e.g., specify the reference camera, using set_calib_data. You can
also specify parameters for the complete setup or just configure parameters of individual cameras as well as
calibration object poses in the setup.
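
The following sketch outlines such a preparation for two cameras observing one calibration plate (the plate description file, the numeric camera parameters, NumPoses, and the image name tuples ImageFiles0/ImageFiles1 are placeholders):

* Calibration data model for 2 cameras observing 1 calibration object.
create_calib_data ('calibration_object', 2, 1, CalibDataID)
gen_cam_par_area_scan_division (0.012, 0, 5.2e-6, 5.2e-6, 640, 480, \
                                1280, 960, StartCamPar)
set_calib_data_cam_param (CalibDataID, 0, [], StartCamPar)
set_calib_data_cam_param (CalibDataID, 1, [], StartCamPar)
set_calib_data_calib_object (CalibDataID, 0, 'calplate_80mm.cpd')
* Camera 0 serves as reference camera.
set_calib_data (CalibDataID, 'model', 'general', 'reference_camera', 0)
* Collect the observations (one call per camera and calibration object pose).
for PoseIdx := 0 to NumPoses - 1 by 1
    read_image (Image0, ImageFiles0[PoseIdx])
    find_calib_object (Image0, CalibDataID, 0, 0, PoseIdx, [], [])
    read_image (Image1, ImageFiles1[PoseIdx])
    find_calib_object (Image1, CalibDataID, 1, 0, PoseIdx, [], [])
endfor
calibrate_cameras (CalibDataID, Error)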

Performing the Actual Camera Calibration


The calibration performed by calibrate_cameras depends on the camera types that are involved in the cali-
bration setup. While different camera setups require specific conditions when acquiring images, the basic steps of
the calibration procedure for setups including projective and/or telecentric cameras are similar:

1. Building a chain of observation poses: In the first step, the operator calibrate_cameras tries to build
a valid chain of observation poses, that connects all cameras and calibration object poses to the reference
camera. Depending on the setup, the conditions for a valid chain of poses differ. For specific information
see the respective paragraphs below.
If there is a camera that cannot be reached (i.e., it is not observing any calibration object pose that can
be connected in the chain), the calibration process is terminated with an error. Otherwise, the algorithm
initializes all calibration items’ poses by going down this chain.
2. First optimization: In this step, calibrate_cameras performs the actual optimization for all optimiza-
tion parameters that were not explicitly excluded from the calibration.
3. Second optimization: Based on the so-far calibrated cameras, the algorithm corrects all observations that
contain mark contour information (see find_calib_object). Then, the calibration setup is optimized
anew for the corrections to take effect. If no contour information was available, this step is skipped.


4. Compute quality of parameter estimation: In the last step, calibrate_cameras computes the stan-
dard deviations and the covariances of the calibrated internal camera parameters.

The following paragraphs give further information about the conditions specific to the camera setups.

Projective area scan cameras For a setup with projective area scan cameras, the calibration is performed in the
four steps listed above. The algorithm tries to build a chain of observation poses that connects all cameras
and calibration object poses to the reference camera like in the diagram below.

(1) All cameras can be connected by a chain of observation poses. (2) The leftmost camera is isolated,
because the left calibration plate cannot be seen by any other camera.

Possible projective area scan cameras are:

• ’area_scan_division’
• ’area_scan_polynomial’
• ’area_scan_tilt_division’
• ’area_scan_tilt_polynomial’
• ’area_scan_tilt_image_side_telecentric_division’
• ’area_scan_tilt_image_side_telecentric_polynomial’
• ’area_scan_hypercentric_division’
• ’area_scan_hypercentric_polynomial’

Telecentric area scan cameras For a setup with telecentric area scan cameras, similar to projective area scan
cameras, the same four steps that are listed above are executed. In the first step (building a chain of observa-
tion poses that connects all cameras and calibration objects), additional conditions must hold. Since the pose
of an object can only be determined up to a translation along the optical axis, each calibration object must be
observed by at least two cameras to determine its relative location. Otherwise, its pose is excluded from the
calibration. Also, since a planar calibration object appears the same from two different observation angles,
the relative pose of the cameras among each other cannot be determined unambiguously. Therefore, there
are always two valid alternative relative poses. Both alternatives result in a consistent camera setup which
can be used for measuring. Since the ambiguity cannot be resolved, the first of the alternatives is returned.
Note that, if the returned pose is not the real pose but the alternative one, then this will result in a mirrored
reconstruction.
Possible telecentric area scan cameras are:

• ’area_scan_telecentric_division’
• ’area_scan_telecentric_polynomial’
• ’area_scan_tilt_bilateral_telecentric_division’
• ’area_scan_tilt_bilateral_telecentric_polynomial’
• ’area_scan_tilt_object_side_telecentric_division’
• ’area_scan_tilt_object_side_telecentric_polynomial’


Projective and telecentric area scan cameras For a mixed setup with projective and telecentric area scan cam-
eras, the algorithm performs the same four steps as enumerated above. Possible ambiguities during the first
step (building a chain of observation poses that connects all cameras and calibration objects), as described
above for the setup with telecentric cameras, can be resolved as long as there exists a chain of observation
poses consisting of all perspective cameras and a sufficient number of calibration objects. Here, sufficient
number means that each telecentric camera observes at least two calibration objects of this chain.

Mixed calibration setup with perspective (P) and telecentric (T) area scan cameras. (1) All perspective
cameras are connected by a chain of observation poses that only contains perspective cameras. (2) The
second calibration plate (from the left) is not observed by the rightmost perspective camera. Therefore, the
relative pose between both perspective cameras cannot be determined uniquely.

Line scan cameras Setups with telecentric line scan cameras (’line_scan_telecentric’) behave identically to se-
tups with telecentric area scan cameras and the same restrictions and ambiguities that are described above
apply. For this type of setup, two possible configurations can be distinguished. In the first configuration,
all cameras are mounted rigidly and stationary and the object is moved linearly in front of the cameras.
Alternatively, all cameras are mounted rigidly with respect to each other and are moved across the object by
the same linear actuator. In both cases, all cameras share a common motion vector, which is modeled in the
camera coordinate system of the reference camera and is transformed to the camera coordinate systems of all
other cameras by the rotation part of the respective camera’s pose. This configuration is assumed by default.
In the second configuration, the cameras are moved by independent linear actuators in different directions.
In this case, each camera has its own independent motion vector. The type of configuration can be selected
with set_calib_data.



Different configurations of telecentric line scan camera setups can be distinguished: (1) Only one motion
vector needs to be computed if the cameras are mounted stationary while the object is moved linearly, (2) or
if the cameras are moved across the object while mounted rigidly to each other. (3) Alternatively, if the
cameras are moved independently from each other, a motion vector is determined for each camera.
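
If the cameras are moved by independent actuators, this could, for example, be declared as follows (a minimal sketch; CalibDataID is an already created calibration data model):

* The cameras do not share a common motion vector
* (the default configuration assumes that they do).
set_calib_data (CalibDataID, 'model', 'general', 'common_motion_vector', 'false')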

Note that two different stereo setups are common for telecentric line scan cameras. For both setups, a linear,
constant motion is assumed for the observed object or the camera system respectively.

• For along-track setups, one camera is placed in front, looking backwards, while the second camera is
mounted behind, looking forwards, both at a suitable angle with respect to the motion vector.
• The cameras in an across-track setup are all directed perpendicular to the motion vector, while the
viewing planes are approximately coplanar. Therefore, the depth of field is rather limited. Precise
measurements are only possible in areas where the depths of field of the individual cameras overlap.


Stereo setups for telecentric line scan cameras: (1) Along-track setup and (2) Across-track setup.

For setups with projective line scan cameras (’line_scan’), the following restriction exists: only one camera
can be calibrated and only one calibration object per setup can be used.

Finally, for calibration plates with rectangularly arranged marks (see gen_caltab) all observations must contain
the projection coordinates of all calibration marks of the calibration object. For calibration plates with hexagonally
arranged marks (see create_caltab) this restriction is not applied. You can find further information about cal-
ibration plates and the acquisition of calibration images in the section “Additional information about the calibration
process” within the chapter Calibration.
Checking the Success of the Calibration
If more than one camera is calibrated simultaneously, the value of Error is more difficult to judge. As a rule
of thumb, Error should be as small as possible and at least smaller than 1.0, thus indicating that a subpixel
precise evaluation of the data is possible with the calibrated parameters. This value might be difficult to reach in
particular configurations. For further analysis of the quality of the calibration, refer to the standard deviations and
covariances of the estimated parameters.
Getting the Calibration Results
The results of the calibration, i.e., internal camera parameters, camera poses (external camera parameters), calibration object poses, etc., can be queried with get_calib_data.
Note that the poses of telecentric cameras can only be determined up to a displacement along the z-axis of the
coordinate system of the respective camera (perpendicular to the image plane). Therefore, all camera poses are
moved along this axis until they all lie on a common sphere. The center of the sphere is defined by the pose of the
first calibration object. The radius of the sphere depends on the calibration setup. If projective and telecentric area
scan cameras are calibrated, the radius is the maximum over all distances from the perspective cameras to the first
calibration object. Otherwise, if only telecentric area scan cameras are considered, the radius is equal to 1 m.
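As a sketch (all variable names are placeholders), the back projection error and the most important results might be inspected like this:

calibrate_cameras (CalibDataID, Error)
* Optimized internal parameters and pose of camera 1
* (relative to the reference camera).
get_calib_data (CalibDataID, 'camera', 1, 'params', CamParam1)
get_calib_data (CalibDataID, 'camera', 1, 'pose', CamPose1)
* Standard deviations of the internal parameters of camera 1.
get_calib_data (CalibDataID, 'camera', 1, 'params_deviations', Deviations1)
* Pose of calibration object 0 in its first observed position.
get_calib_data (CalibDataID, 'calib_obj_pose', [0,0], 'pose', ObjPose00)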
Further Information
Learn about the calibration of multi-camera setups and many other topics in interactive online courses at our
MVTec Academy.

calibrate_cameras ( : : CalibDataID : Error )

Determine all camera parameters by a simultaneous minimization process.


The operator calibrate_cameras calculates the internal and external camera parameters of a calibration data
model specified in CalibDataID. The calibration data model describes a setup of one or more cameras and is
specified during the creation of the data model. You can find detailed information about the calibration process in
the chapter reference Calibration.
The root mean square error (RMSE) of the back projection of the optimization is returned in Error (in pixels).
The error gives a general indication whether the optimization was successful. You can find more details about the
RMSE in the chapter reference mentioned above.


Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Back projection root mean square error (RMSE) of the optimization.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

This operator modifies the state of the following input parameter:


• CalibDataID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_calib_data, set_calib_data_cam_param, set_calib_data_calib_object,
set_calib_data_observ_points, find_calib_object, set_calib_data,
remove_calib_data_observ
Possible Successors
get_calib_data
References
Carsten Steger: “A Comprehensive and Versatile Camera Model for Cameras with Tilt Lenses”; International
Journal of Computer Vision, vol. 123, no. 2, pp. 121-159, 2017.
Carsten Steger, Markus Ulrich, Christian Wiedemann: “Machine Vision Algorithms and Applications”; Wiley-
VCH, Weinheim, 2nd Edition, 2018.
Markus Ulrich, Carsten Steger: “A Camera Model for Cameras with Hypercentric Lenses and Some Example
Applications”; Machine Vision and Applications, vol. 30, no. 6, pp. 1013-1028, 2019.
Carsten Steger, Markus Ulrich: “A Camera Model for Line-Scan Cameras with Telecentric Lenses”; International
Journal of Computer Vision, vol. 129, no. 1, pp. 80-99, 2021.
Carsten Steger, Markus Ulrich: “A Multi-view Camera Model for Line-Scan Cameras with Telecentric Lenses”;
Journal of Mathematical Imaging and Vision, vol. 64, no. 2, pp. 105-130, 2022.
Module
Calibration

clear_calib_data ( : : CalibDataID : )

Free the memory of a calibration data model.


The operator clear_calib_data frees the memory of the calibration data model CalibDataID. After call-
ing clear_calib_data, the model can no longer be used. The handle CalibDataID becomes invalid.
Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.


This operator modifies the state of the following input parameter:


• CalibDataID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Module
Calibration

clear_camera_setup_model ( : : CameraSetupModelID : )

Free the memory of a calibration setup model.


The operator clear_camera_setup_model frees the memory of a camera setup model that was created
by create_camera_setup_model or read_camera_setup_model or was returned as a result by
get_calib_data. After calling clear_camera_setup_model, the model can no longer be used. The
handle CameraSetupModelID becomes invalid.
Parameters
. CameraSetupModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . camera_setup_model ; handle
Handle of the camera setup model.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• CameraSetupModelID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Module
Calibration

create_calib_data ( : : CalibSetup, NumCameras,


NumCalibObjects : CalibDataID )

Create a HALCON calibration data model.


The operator create_calib_data creates a generic calibration data model that stores

• the description of a camera calibration setup,


• settings for the calibration process,
• the calibration data, and
• the results of the camera calibration or the hand-eye calibration.

In the parameter CalibSetup, you specify the calibration setup type. Currently, five types are supported. A
model of the type ’calibration_object’ is used to calibrate the internal camera parameters and the camera poses of
one or more cameras based on the metric information extracted from observations of calibration objects.
A model of type ’hand_eye_moving_cam’, ’hand_eye_stationary_cam’, ’hand_eye_scara_moving_cam’, or
’hand_eye_scara_stationary_cam’ is used to perform a hand-eye calibration based on observations of a calibration
object and corresponding poses of a robot tool in the robot base coordinate system. The latter four model types
on the one hand distinguish whether the camera or the calibration object is moved by the robot and on the other
hand distinguish whether an articulated robot or a SCARA robot is calibrated. The arm of an articulated robot has


three rotary joints typically covering 6 degrees of freedom (3 translations and 3 rotations). SCARA robots have
two parallel rotary joints and one parallel prismatic joint covering only 4 degrees of freedom (3 translations and 1
rotation). Loosely speaking, an articulated robot is able to tilt its end effector while a SCARA robot is not.
NumCameras specifies the number of cameras that are calibrated simultaneously in the setup.
NumCalibObjects specifies the number of calibration objects observed by the cameras. Please note that for
camera calibrations with line scan cameras with perspective lenses only a single calibration object is allowed
(NumCalibObjects=1). For hand-eye calibrations, only two setups are currently supported: either one area
scan projective camera and one calibration object (NumCameras=1, NumCalibObjects=1) or a general sensor
with no calibration object (NumCameras=0, NumCalibObjects=0). Attention: The four hand-eye calibration
models do not support telecentric cameras.
CalibDataID returns a handle of the new calibration data model. You pass this handle to other operators to col-
lect the description of the camera setup, the calibration settings, and the calibration data. For camera calibrations,
you pass it to calibrate_cameras, which performs the actual camera calibration and stores the calibration
results in the calibration data model. For a detailed description of the preparation process, please refer to the chap-
ter Calibration. For hand-eye calibrations, you pass it to calibrate_hand_eye, which performs the actual
hand-eye calibration and stores the calibration results in the calibration data model. For a detailed description of
the preparation process, please refer to the operator calibrate_hand_eye.
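For example (a minimal sketch), the two most common setups could be created as follows:

* Camera calibration setup: 2 cameras, 1 calibration object.
create_calib_data ('calibration_object', 2, 1, CalibDataID1)
* Hand-eye calibration setup: 1 moving camera, 1 calibration object.
create_calib_data ('hand_eye_moving_cam', 1, 1, CalibDataID2)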
Parameters
. CalibSetup (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the calibration setup.
Default: ’calibration_object’
List of values: CalibSetup ∈ {’calibration_object’, ’hand_eye_moving_cam’,
’hand_eye_stationary_cam’, ’hand_eye_scara_moving_cam’, ’hand_eye_scara_stationary_cam’}
. NumCameras (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Number of cameras in the calibration setup.
Default: 1
Restriction: NumCameras >= 0
. NumCalibObjects (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Number of calibration objects.
Default: 1
Restriction: NumCalibObjects >= 0
. CalibDataID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of the created calibration data model.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
set_calib_data_cam_param, set_calib_data_calib_object
Module
Calibration

create_camera_setup_model ( : : NumCameras : CameraSetupModelID )

Create a model for a setup of calibrated cameras.


The operator create_camera_setup_model creates a new camera setup model and returns a handle to it
in CameraSetupModelID. The camera setup comprises a fixed number of cameras, which is specified by
NumCameras and cannot be changed once the model was created. For each camera, the setup stores its internal
parameters, covariances of the internal parameters (optional) and a pose of the camera.


Using set_camera_setup_param, you can change the coordinate system in which the cameras are repre-
sented: You can either select a camera and convert all camera poses to be relative to this camera or you can apply
a general coordinate transformation, which moves the setup’s coordinate system into an arbitrary pose. Changing
the coordinate system of the camera setup is particularly useful in cases, where, e.g., you want to represent the
cameras in the coordinate system of an object being observed by the cameras. This concept is further demonstrated
in the example below.
The internal parameters and pose of a camera are set or modified by set_camera_setup_cam_param. Fur-
ther camera parameters and general setup parameters can be set by set_camera_setup_param as well. All
parameters can be read back by get_camera_setup_param.
A camera setup model can be saved into a file by write_camera_setup_model and read back by
read_camera_setup_model.
Parameters
. NumCameras (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of cameras in the setup.
Default: 2
Suggested values: NumCameras ∈ {1, 2, 3, 4}
Restriction: NumCameras >= 1
. CameraSetupModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . camera_setup_model ; handle
Handle to the camera setup model.
Example

* Create camera setup of three cameras.


create_camera_setup_model (3, CameraSetupModelID)
gen_cam_par_area_scan_division (0.006, 0, 8.3e-6, 8.3e-6,\
512, 384, 1024, 768, StartCamPar)
* Camera 0 is located in the origin.
set_camera_setup_cam_param (CameraSetupModelID, 0, [], StartCamPar,\
[0, 0, 0, 0, 0, 0, 0])

* Camera 1 is shifted 0.07 m in positive x-direction relative to


* camera 0.
set_camera_setup_cam_param (CameraSetupModelID, 1, [], StartCamPar,\
[0.07, 0, 0, 0, 0, 0, 0])

* Camera 2 is shifted 0.1 m in negative y-direction relative to


* camera 0.
set_camera_setup_cam_param (CameraSetupModelID, 2, [], StartCamPar,\
[0.0, -0.1, 0, 0, 0, 0, 0])

* There is an object, which is 0.5 m away from the origin in


* z-direction, and is facing the origin.
ObjectPose := [0, 0, 0.5, 180, 0, 0, 0]
* Place the setup's origin in the object.
set_camera_setup_param (CameraSetupModelID, 'general', 'coord_transf_pose',\
ObjectPose)

* Now the camera poses are given relative to the object.


get_camera_setup_param (CameraSetupModelID, 0, 'pose', CamPose0)
* CamPose0 is equivalent to [0.0, 0.0, 0.5, 180.0, 0.0, 0.0, 0]

get_camera_setup_param (CameraSetupModelID, 1, 'pose', CamPose1)


* CamPose1 is equivalent to [0.07, 0.0, 0.5, 180.0, 0.0, 0.0, 0]

get_camera_setup_param (CameraSetupModelID, 2, 'pose', CamPose2)


* CamPose2 is equivalent to [0.0, 0.1, 0.5, 180.0, 0.0, 0.0, 0]


Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
set_camera_setup_param
Module
Calibration

deserialize_calib_data ( : : SerializedItemHandle : CalibDataID )

Deserialize a serialized calibration data model.


deserialize_calib_data deserializes a calibration data model that was serialized by
serialize_calib_data (see fwrite_serialized_item for an introduction of the basic principle of
serialization). The serialized calibration data model is defined by the handle SerializedItemHandle. The
deserialized values are stored in an automatically created calibration data model with the handle CalibDataID.
Note that serialize_calib_data does not serialize any calibration results. Yet, calibrate_cameras
can be called for a fully configured calibration model immediately after the deserialization. All calibration results
are accessible afterwards.
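A typical round trip might look like this (a sketch; the file name is a placeholder):

* Serialize the configured calibration data model and write it to a file.
serialize_calib_data (CalibDataID, SerializedItemHandle)
open_file ('calib_model.hcd', 'output_binary', FileHandle)
fwrite_serialized_item (FileHandle, SerializedItemHandle)
close_file (FileHandle)
* Later (or in another process): read and deserialize the model.
open_file ('calib_model.hcd', 'input_binary', FileHandleIn)
fread_serialized_item (FileHandleIn, SerializedItemIn)
close_file (FileHandleIn)
deserialize_calib_data (SerializedItemIn, CalibDataID2)
* A fully configured model can be calibrated immediately afterwards.
calibrate_cameras (CalibDataID2, Error)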
Parameters

. SerializedItemHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle


Handle of the serialized item.
. CalibDataID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
Result
If the parameters are valid, the operator deserialize_calib_data returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
fread_serialized_item, receive_serialized_item, serialize_calib_data
Module
Calibration

deserialize_camera_setup_model (
: : SerializedItemHandle : CameraSetupModelID )

Deserialize a serialized camera setup model.


deserialize_camera_setup_model deserializes a camera setup model that was serialized by
serialize_camera_setup_model (see fwrite_serialized_item for an introduction of the
basic principle of serialization). The serialized camera setup model is defined by the handle


SerializedItemHandle. The deserialized values are stored in an automatically created camera setup model
with the handle CameraSetupModelID.
Parameters
. SerializedItemHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
. CameraSetupModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . camera_setup_model ; handle
Handle to the camera setup model.
Result
If the parameters are valid, the operator deserialize_camera_setup_model returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
fread_serialized_item, receive_serialized_item, serialize_camera_setup_model
Module
Calibration

get_calib_data ( : : CalibDataID, ItemType, ItemIdx,


DataName : DataValue )

Query data stored or computed in a calibration data model.


With the operator get_calib_data, you can query data of the calibration data model CalibDataID.
Note that in the following, all ’pose’-related data is given in relation to the coordinate system of the model’s
reference camera, which can be set with set_calib_data and queried with get_calib_data. By default,
the first camera (camera index 0) is used as reference camera.
The calibration data model contains various kinds of data. How to query specific data of the calibration data model
is described for different categories of data:

• Model-related data (ItemType ’model’)


• Camera-related data (ItemType ’camera’)
• Data related to calibration objects (ItemType ’calib_obj’)
• Data related to calibration object poses (ItemType ’calib_obj_pose’)
• Hand-eye calibration related data (different values for ItemType)

Before we describe the individual data you can query in detail, we provide you with an overview on which data
is available after the individual steps of the calibration processes. When calibrating cameras or a hand-eye sys-
tem, several operators are called that step by step fill the calibration data model with content. In the following,
for each operator a table lists the data that is added to the model. Additionally, you find information about the
combinations of the values for ItemType, ItemIdx, and DataName that are needed to query the information
with get_calib_data. For the different indices that are used within the tables the following abbreviations (or
potential variable names) are used:

• Camera index: CameraIdx


• Calibration object index: CalibObjIdx
• Calibration object pose index: CalibObjPoseIdx


Detailed descriptions of the data that can be queried can then be found in the specific sections that handle the
different categories of data individually.
To get detailed information about the calibration process of your camera setup see the chapter Calibration.
Content of Calibration Data Model When Calibrating Cameras
For each operator that extends the calibration model, a table is provided to give an overview on the respective data:

• create_calib_data:

Data added to the model              ItemType    ItemIdx      DataName
Type of the calibration data model   ’model’     ’general’    ’type’
Number of cameras                    ’model’     ’general’    ’num_cameras’
Number of calibration objects        ’model’     ’general’    ’num_calib_objs’
• set_calib_data_cam_param:

Data added to the model              ItemType    ItemIdx      DataName
Camera types                         ’camera’    CameraIdx    ’type’
Initial internal camera parameters   ’camera’    CameraIdx    ’init_params’
• set_calib_data_calib_object:

Data added to the model                         ItemType       ItemIdx       DataName
Numbers of calibration marks of the             ’calib_obj’    CalibObjIdx   ’num_marks’
  calibration objects
Coordinates of the calibration marks of the     ’calib_obj’    CalibObjIdx   ’x’, ’y’, ’z’
  calibration objects relative to their calibration
  object coordinate systems

For standard HALCON calibration plates, further calibration plate specific information is added to the model,
which is not accessible with get_calib_data but can be obtained directly from the corresponding calibration
plate description files instead (for details about the description files see create_caltab for a calibration plate
with hexagonally arranged marks and gen_caltab for a calibration plate with rectangularly arranged marks).

• find_calib_object (for standard HALCON calibration plates):

Data added to the model                                accessible with
Observed image coordinates of the calibration marks   get_calib_data_observ_points
Observed contours of the calibration marks            get_calib_data_observ_contours
Observed poses of the calibration plate relative to   get_calib_data_observ_pose or
  the camera coordinate system                        get_calib_data_observ_points
• set_calib_data_observ_points (for other calibration objects than the HALCON calibration
plates):

Data added to the model                                accessible with
Observed image coordinates of the calibration marks   get_calib_data_observ_points


• set_calib_data:

Data added to the model                          ItemType            ItemIdx                                        DataName
Reference camera                                 ’model’             ’general’                                      ’reference_camera’
Internal and external camera parameters to       ’camera’            ’general’ or CameraIdx                         ’calib_settings’
  calibrate
Internal and external camera parameters to be    ’camera’            ’general’ or CameraIdx                         ’excluded_settings’
  excluded from the calibration
For stereo setups with telecentric line scan     ’model’             ’general’                                      ’common_motion_vector’
  cameras: Do the cameras have a common motion
  vector?
Calibration object pose settings to be           ’calib_obj_pose’    ’general’ or [CalibObjIdx, CalibObjPoseIdx]    ’calib_settings’
  optimized
Calibration object pose settings to be           ’calib_obj_pose’    ’general’ or [CalibObjIdx, CalibObjPoseIdx]    ’excluded_settings’
  excluded from the calibration

• calibrate_cameras:


Data added to the model                                ItemType            ItemIdx                           DataName
Camera setup model (needed for multi-view stereo       ’model’             ’general’                         ’camera_setup_model’
  reconstruction)
Optimized internal camera parameters                   ’camera’            CameraIdx                         ’params’
Standard deviations of the optimized internal          ’camera’            CameraIdx                         ’params_deviations’
  camera parameters
Covariance matrices of the optimized internal          ’camera’            CameraIdx                         ’params_covariances’
  camera parameters
Labels for the internal camera parameters              ’camera’            CameraIdx                         ’params_labels’
Initial external camera parameters (camera poses)      ’camera’            CameraIdx                         ’init_pose’
Optimized external camera parameters (camera poses)    ’camera’            CameraIdx                         ’pose’
Labels for the external camera parameters              ’camera’            CameraIdx                         ’pose_labels’
  (camera poses)
Initial calibration object poses                       ’calib_obj_pose’    [CalibObjIdx, CalibObjPoseIdx]    ’init_pose’
Optimized calibration object poses                     ’calib_obj_pose’    [CalibObjIdx, CalibObjPoseIdx]    ’pose’
Labels for the calibration object pose parameters      ’calib_obj_pose’    [CalibObjIdx, CalibObjPoseIdx]    ’pose_labels’

Content of Calibration Data Model When Performing Hand-Eye Calibration


For each operator that extends the calibration model when performing hand-eye calibration, a table is provided to
give an overview on the respective data:

• create_calib_data:
See the section ’Content of Calibration Data Model When Calibrating Cameras’.
• set_calib_data:

Data added to the model                      ItemType    ItemIdx            DataName
Optimization method                          ’model’     ’general’          ’optimization_method’
Poses of the robot tool in robot base        ’tool’      CalibObjPoseIdx    ’tool_in_base_pose’
  coordinates

• set_calib_data_observ_pose (observations obtained by 3D sensors):

Data added to the model               accessible with
Observed calibration object poses     get_calib_data_observ_pose

• set_calib_data_cam_param, set_calib_data_calib_object, and


find_calib_object or set_calib_data_observ_points (observations obtained by cam-
eras):
See the section ’Content of Calibration Data Model When Calibrating Cameras’.


• calibrate_hand_eye:
Moving camera scenario:

Data added to the model                            ItemType       ItemIdx   DataName
Pose of robot tool in camera coordinate system     ’camera’       0         ’tool_in_cam_pose’
Pose of calibration object in robot base           ’calib_obj’    0         ’obj_in_base_pose’
  coordinate system
Standard deviations of the pose of the robot       ’camera’       0         ’tool_in_cam_pose_deviations’
  tool in camera coordinate system
Covariance matrices of the pose of the robot       ’camera’       0         ’tool_in_cam_pose_covariances’
  tool in camera coordinate system
Standard deviations of the pose of the             ’calib_obj’    0         ’obj_in_base_pose_deviations’
  calibration object in robot base coordinate
  system
Covariance matrices of the pose of the             ’calib_obj’    0         ’obj_in_base_pose_covariances’
  calibration object in robot base coordinate
  system

Stationary camera scenario:

Data added to the model                            ItemType       ItemIdx   DataName
Pose of robot base in camera coordinate system     ’camera’       0         ’base_in_cam_pose’
Pose of calibration object in robot tool           ’calib_obj’    0         ’obj_in_tool_pose’
  coordinate system
Standard deviations of the pose of the robot       ’camera’       0         ’base_in_cam_pose_deviations’
  base in camera coordinate system
Covariance matrices of the pose of the robot       ’camera’       0         ’base_in_cam_pose_covariances’
  base in camera coordinate system
Standard deviations of the pose of the             ’calib_obj’    0         ’obj_in_tool_pose_deviations’
  calibration object in robot tool coordinate
  system
Covariance matrices of the pose of the             ’calib_obj’    0         ’obj_in_tool_pose_covariances’
  calibration object in robot tool coordinate
  system

Both hand-eye scenarios:


Data added to the model                          ItemType            ItemIdx                 DataName
Calibrated poses of the calibration object in    ’calib_obj_pose’    [0, CalibObjPoseIdx]    ’pose’
  camera coordinate system (not available for
  SCARA robots)
Root mean square error (RMSE) of the back        ’model’             ’general’               ’camera_calib_error’
  projection after the optimization of the camera
  system
Pose error of the complete chain of              ’model’             ’general’               ’hand_eye_calib_error’
  transformations
Camera setup model (needed for multi-view        ’model’             ’general’               ’camera_setup_model’
  stereo reconstruction)

Both hand-eye scenarios, if ’optimization_method’ is set to ’stochastic’:

Data added to the model                          ItemType    ItemIdx            DataName
Root mean square error (RMSE) of the back        ’model’     ’general’          ’camera_calib_error_corrected_tool’
  projection into camera images, via pose chain
  using corrected tool poses
Pose error of the complete chain of              ’model’     ’general’          ’hand_eye_calib_error_corrected_tool’
  transformations using corrected tool poses
Standard deviations of the input poses of the    ’tool’      ’general’          ’tool_translation_deviation’, ’tool_rotation_deviation’
  robot tool in robot base coordinates
Corrected poses of the robot tool in robot       ’tool’      CalibObjPoseIdx    ’tool_in_base_pose_corrected’
  base coordinates

The following sections describe the parameters for the specific categories of data in more detail.

Model-Related Data

ItemType=’model’: ItemIdx must be set to ’general’.


Depending on the selection in DataName, the following model-related data is then returned in DataValue:
’type’: Type of the calibration data model. Currently, the five types ’calibration_object’,
’hand_eye_stationary_cam’, ’hand_eye_moving_cam’, ’hand_eye_scara_stationary_cam’, and
’hand_eye_scara_moving_cam’ are supported.
’reference_camera’: Index of the reference camera for the calibration model. All poses stored in the calibra-
tion data model are specified in the coordinate system of this reference camera.
’num_cameras’: Number of cameras in the calibration data model (see create_calib_data).
’num_calib_objs’: Number of calibration objects in the calibration data model (see
create_calib_data).
’common_motion_vector’: For stereo setups with telecentric line scan cameras, a string with a Boolean value
(i.e., ’true’ or ’false’) that determines whether the cameras have a common motion vector.


’camera_setup_model’: A handle to a camera setup model containing the poses and the internal parameters
for the calibrated cameras from the current calibration setup.
’camera_calib_error’: The root mean square error (RMSE) of the back projection of the optimization of the
camera system. Typically, this error is queried after a hand-eye calibration (calibrate_hand_eye)
was performed, where internally the camera system is calibrated without returning the error of the cam-
era calibration. The returned error is identical to the error returned by calibrate_cameras, except
for ’optimization_method’ set to ’stochastic’, which refines hand-eye poses and camera parameters si-
multaneously for articulated robots.
’hand_eye_calib_error’: After a successful hand-eye calibration, the pose error of the complete chain of
transformations is returned. To be more precise, a tuple with four elements is returned, where the first
element is the root-mean-square error of the translational part, the second element is the root-mean-
square error of the rotational part, the third element is the maximum translational error and the fourth
element is the maximum rotational error. The returned errors are identical to the errors returned by
calibrate_hand_eye.
’optimization_method’: Optimization method that was set for the hand-eye calibration (see
set_calib_data).
’camera_calib_error_corrected_tool’: The root mean square error (RMSE) of the back projection of the
calibration mark centers into camera images, via the pose chain using corrected tool poses. By con-
trast, ’camera_calib_error’ uses the direct back projection of ’calib_obj_pose’. This parameter is only
available if ’optimization_method’ is set to ’stochastic’.
’hand_eye_calib_error_corrected_tool’: After a successful hand-eye calibration, the pose error of
the complete chain of transformations using corrected tool poses is returned. By contrast,
’hand_eye_calib_error’ uses the input tool poses. This parameter is only available if ’optimiza-
tion_method’ is set to ’stochastic’.
The parameters ’reference_camera’, ’common_motion_vector’, and ’optimization_method’ can be set with
set_calib_data. The other parameters are set during the model creation or are a result of the calibration
process and cannot be modified.

Camera-Related Data

ItemType=’camera’: ItemIdx determines, if data is queried for all cameras in general or for a specific camera.
With ItemIdx=’general’, the default value of a parameter for all cameras is returned. In contrast, if you
pass a valid camera index instead, i.e., a number between 0 and NumCameras-1 (NumCameras is specified
during model creation with create_calib_data), only the parameter value of the specified camera is
returned.
By selecting the following parameters in DataName, you can query which camera parameters are (or have
been) optimized during the calibration performed by calibrate_cameras:
’calib_settings’: List of the camera parameters that are marked for calibration.
’excluded_settings’: List of camera parameters that are excluded from the calibration.
These parameters can be modified by a corresponding call to set_calib_data.
The following parameters can only be queried for a specific camera, i.e., you must pass a valid camera index
in ItemIdx:
’type’: The camera type that was set with set_calib_data_cam_param.
’init_params’: Initial internal camera parameters (set with set_calib_data_cam_param).
’params’: Optimized internal camera parameters.
’params_deviations’: Standard deviations of the optimized camera parameters, as estimated at the end of the
camera calibration. Note that if the tuple returned for ’params’ contains n elements, the tuple returned
for ’params_deviations’ contains (n − 1) elements since the camera parameter tuple contains the camera
type in the first element of the tuple, whereas the tuple returned for ’params_deviations’ does not contain
the camera type.
’params_covariances’: Covariance matrix of the optimized camera parameters, as estimated at the end of the
camera calibration. Note that if the tuple returned for ’params’ contains n elements, the tuple returned
for ’params_covariances’ contains (n − 1) × (n − 1) elements since the camera parameter tuple contains
the camera type in the first element of the tuple, whereas the tuple returned for ’params_covariances’
does not contain the camera type.


’params_labels’: A convenience list of labels for the entries returned by ’params’. This list is camera-type
specific. Note that this list contains the label ’camera_type’ in its first position. If the first element of
the tuple is removed, the list refers to the labels of ’params_deviations’ and the labels of the rows and
columns of ’params_covariances’.
’init_pose’: Initial camera pose, relative to the current reference camera. It is computed internally based on
observation poses during the calibration process (see Calibration).
’pose’: Optimized camera pose, relative to the current reference camera. If one single telecentric camera is
calibrated, the translation along the z-axis is set to the value 0.0. If more than one telecentric camera is
calibrated, the camera poses are moved in direction of their z-axis until they all lie on a sphere centered
at the first observed calibration plate. The radius of the sphere corresponds to the longest distance of a
camera to the first observed calibration plate. If this calculated distance is smaller than 1 m, the radius is
set to 1 m.
’pose_labels’: A convenience list of labels for the entries returned by ’pose’.
The calibrated camera parameters (’params’ and ’pose’) can be queried only after a successful execution
of calibrate_cameras. The initial internal camera parameters ’init_params’ can be queried after a
successful call to set_calib_data_cam_param.
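For example (a sketch; camera index 0 and the variable names are placeholders), camera-related results could be queried like this:

* Optimized internal parameters together with their labels and deviations.
get_calib_data (CalibDataID, 'camera', 0, 'params', CamParams0)
get_calib_data (CalibDataID, 'camera', 0, 'params_labels', ParamLabels0)
get_calib_data (CalibDataID, 'camera', 0, 'params_deviations', ParamDev0)
* Optimized camera pose relative to the reference camera.
get_calib_data (CalibDataID, 'camera', 0, 'pose', CamPose0)
* Parameters currently marked for calibration (default for all cameras).
get_calib_data (CalibDataID, 'camera', 'general', 'calib_settings', Settings)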

Data Related to Calibration Objects

ItemType=’calib_obj’: ItemIdx must be set to a valid calibration object index (number between 0
and NumCalibObjects-1). NumCalibObjects is specified during the model creation with
create_calib_data.
The following parameters can be queried with DataName and are returned in DataValue:
’num_marks’: Number of calibration marks of the calibration object.
’x’, ’y’, ’z’: Coordinates of the calibration marks relative to the calibration object coordinate system.
These parameters can be modified with set_calib_data_calib_object.

Data Related to Calibration Object Poses

ItemType=’calib_obj_pose’: ItemIdx determines, if data is queried for all calibration object poses in general
or for a specific calibration object pose. With ItemIdx=’general’, the default value of a parameter for all
calibration object poses is returned. In contrast, if you pass a valid calibration object index instead, i.e., a
tuple containing a valid index pair [CalibObjIdx, CalibObjPoseIdx], only the parameter value of
the specified calibration object pose is returned.
By selecting the following parameters in DataName, you can query which calibration object pose parameters
are (or have been) optimized during the calibration performed by calibrate_cameras:
’calib_settings’: List of calibration object pose parameters marked for calibration.
’excluded_settings’: List of calibration object pose parameters excluded from calibration.
These parameters can be set with set_calib_data.
The following parameters can only be queried for a specific calibration object pose, i.e., you must pass a valid
index pair [CalibObjIdx, CalibObjPoseIdx] in ItemIdx:
’init_pose’: Initial calibration object pose. It is computed internally based on observation poses during the
calibration process (see Calibration). This pose is relative to the current reference camera.
’pose’: Optimized calibration object pose, relative to current reference camera.
’pose_labels’: A convenience list of labels for the entries returned by ’pose’.
These parameters cannot be explicitly modified and can only be queried after calibrate_cameras was
executed.

Hand-Eye Calibration Related Data

ItemType=’tool’: The following parameters can be queried with DataName and are returned in DataValue:
’tool_in_base_pose’: Pose of the robot tool in robot base coordinates with index ItemIdx. These poses
were previously set using set_calib_data and served as input for the hand-eye calibration algo-
rithm.


’tool_in_base_pose_corrected’: Corrected pose of the robot tool in robot base coordinates of the input
’tool_in_base_pose’ with index ItemIdx. This parameter is only available if ’optimization_method’ is
set to ’stochastic’ and after calibrate_hand_eye was executed.
’tool_translation_deviation’, ’tool_rotation_deviation’: Standard deviations of the input poses of the robot
tool in robot base coordinates. ItemIdx has to be set to ’general’. This parameter is only available if
’optimization_method’ is set to ’stochastic’ and after calibrate_hand_eye was executed.
After performing a successful hand-eye calibration using calibrate_hand_eye, the following poses can be
queried for a calibration data model of type:
’hand_eye_moving_cam’, ’hand_eye_scara_moving_cam’: For ItemType=’camera’ and
DataName=’tool_in_cam_pose’, the pose of the robot tool in the camera coordinate system is re-
turned in DataValue. For ItemType=’calib_obj’ and DataName=’obj_in_base_pose’, the pose of the
calibration object in the robot base coordinate system is returned in DataValue.
Note that when calibrating SCARA robots, it is not possible to determine the Z translation of
’obj_in_base_pose’. To eliminate this ambiguity, the Z translation of ’obj_in_base_pose’ is internally set to
0.0 and the ’tool_in_cam_pose’ is calculated accordingly. It is necessary to determine the true translation in
Z after the calibration (see calibrate_hand_eye).
The standard deviations and the covariance matrices of the 6 pose parameters of both poses can
be queried with ’tool_in_cam_pose_deviations’, ’tool_in_cam_pose_covariances’ (ItemType=’camera’),
’obj_in_base_pose_deviations’, and ’obj_in_base_pose_covariances’ (ItemType=’calib_obj’). Like
poses, they are specified in the units [m] and [°].
’hand_eye_stationary_cam’, ’hand_eye_scara_stationary_cam’: For ItemType=’camera’ and
DataName=’base_in_cam_pose’, the pose of the robot base in the camera coordinate system is re-
turned in DataValue. For ItemType=’calib_obj’ and DataName=’obj_in_tool_pose’, the pose of the
calibration object in the robot tool coordinate system is returned in DataValue.
Note that when calibrating SCARA robots, it is not possible to determine the Z translation of
’obj_in_tool_pose’. To eliminate this ambiguity the Z translation of ’obj_in_tool_pose’ is internally set
to 0.0 and the ’base_in_cam_pose’ is calculated accordingly. It is necessary to determine the true translation
in Z after the calibration (see calibrate_hand_eye).
The standard deviations and the covariance matrices of the 6 pose parameters of both poses can be
queried with ’base_in_cam_pose_deviations’, ’base_in_cam_pose_covariances’ (ItemType=’camera’),
’obj_in_tool_pose_deviations’, and ’obj_in_tool_pose_covariances’ (ItemType=’calib_obj’). Like poses,
they are specified in the units [m] and [°].
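For example, after a successful hand-eye calibration of a moving-camera setup, the two resulting poses could be
queried as follows (a minimal sketch; the handle and variable names are illustrative):

* Query the hand-eye calibration results of a moving-camera setup.
get_calib_data (CalibDataID, 'camera', 0, 'tool_in_cam_pose', ToolInCamPose)
get_calib_data (CalibDataID, 'calib_obj', 0, 'obj_in_base_pose', ObjInBasePose)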
Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. ItemType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of calibration data item.
Default: ’camera’
List of values: ItemType ∈ {’model’, ’camera’, ’calib_obj’, ’calib_obj_pose’, ’tool’}
. ItemIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Index of the affected item (depending on the selected ItemType).
Default: 0
Suggested values: ItemIdx ∈ {0, 1, 2, ’general’}
. DataName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; string
The name of the inspected data.
Default: ’params’
List of values: DataName ∈ {’type’, ’reference_camera’, ’num_cameras’, ’num_calib_objs’,
’camera_setup_model’, ’camera_calib_error’, ’camera_calib_error_corrected_tool’, ’hand_eye_calib_error’,
’hand_eye_calib_error_corrected_tool’, ’optimization_method’, ’num_marks’, ’x’, ’y’, ’z’, ’params’, ’pose’,
’init_params’, ’init_pose’, ’params_deviations’, ’params_covariances’, ’params_labels’, ’pose_labels’,
’calib_settings’, ’excluded_settings’, ’common_motion_vector’, ’tool_in_cam_pose’, ’obj_in_base_pose’,
’base_in_cam_pose’, ’obj_in_tool_pose’, ’tool_in_base_pose’, ’tool_in_cam_pose_deviations’,
’obj_in_base_pose_deviations’, ’base_in_cam_pose_deviations’, ’obj_in_tool_pose_deviations’,
’tool_in_cam_pose_covariances’, ’obj_in_base_pose_covariances’, ’base_in_cam_pose_covariances’,
’obj_in_tool_pose_covariances’, ’tool_translation_deviation’, ’tool_rotation_deviation’,
’tool_in_base_pose_corrected’}


. DataValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; real / integer / string
Requested data.
Example

* Get the camera type of camera 0.
get_calib_data (CalibDataID, 'camera', 0, 'type', CameraType)
* Get the optimized (calibrated) pose with index 1 of
* calibration object 2.
get_calib_data (CalibDataID, 'calib_obj_pose', [2,1], 'pose', CalobjPose)

Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
calibrate_cameras, calibrate_hand_eye, create_calib_data, read_calib_data
Module
Calibration

get_calib_data_observ_contours ( : Contours : CalibDataID,
    ContourName, CameraIdx, CalibObjIdx, CalibObjPoseIdx : )

Get contour-based observation data from a calibration data model.


The operator get_calib_data_observ_contours reads contour-based observation data from a calibra-
tion data model CalibDataID and returns it in Contours. These contours result from a preceding call of
find_calib_object. The parameters CameraIdx, CalibObjIdx, and CalibObjPoseIdx are indices
of the observing camera, calibration plate, and calibration object pose. Together, they specify an observation from
the calibration model. Note that if an observation exists, but it was stored in the calibration model CalibDataID
by set_calib_data_observ_points, no contour-based results can be returned.
By setting ContourName to one of the following values, you can select the specific type of the contour results:

’marks’: The contours of the calibration plate marks.


’marks_with_hole’: The calibration plate marks which contain a hole. In this case, the objects returned in
Contours are regions instead of contours.
’caltab’: The contour of the calibration plate finder pattern.
’last_caltab’: The contour of the calibration plate finder pattern, which has been extracted by the last suc-
cessful preceding call to find_calib_object. Note that the observation of the successful call
to find_calib_object is used and consequently the values in CameraIdx, CalibObjIdx, and
CalibObjPoseIdx are ignored.

The mentioned finder pattern depends on the calibration plate:

• Calibration plates with hexagonally arranged marks: Special mark hexagon (i.e., a mark and its six neighbors)
where either four or six marks contain a hole, see create_caltab.
• Calibration plates with rectangularly arranged marks: The border of the calibration plate with a triangle in
one corner.
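
For example, the mark contours of a single observation could be read back as follows (a minimal sketch; the
indices and variable names are illustrative):

* Extract the calibration plate in an image of camera 0 (calibration
* object 0, pose index 3) and read back the contours of its marks.
find_calib_object (Image, CalibDataID, 0, 0, 3, [], [])
get_calib_data_observ_contours (MarkContours, CalibDataID, 'marks', 0, 0, 3)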


Parameters
. Contours (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; object
Contour-based result(s).
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. ContourName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of contour objects to be returned.
Default: ’marks’
List of values: ContourName ∈ {’marks’, ’caltab’, ’last_caltab’, ’marks_with_hole’}
. CameraIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observing camera.
Default: 0
. CalibObjIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observed calibration plate.
Default: 0
. CalibObjPoseIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observed calibration object pose.
Default: 0
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Module
Calibration

get_calib_data_observ_points ( : : CalibDataID, CameraIdx,
    CalibObjIdx, CalibObjPoseIdx : Row, Column, Index, Pose )

Get point-based observation data from a calibration data model.


The operator get_calib_data_observ_points reads point-based observation data
from a calibration data model CalibDataID. This operator reads back observation
data stored by set_calib_data_observ_points or find_calib_object. See
set_calib_data_observ_points for a detailed description of the arguments.
Please note that if set_calib_data_observ_points is used for the calibration, the returned values of
Row and Column are the original values that have been set with set_calib_data_observ_points.
Similarly, if find_calib_object is used for the extraction, the values of Row and Column returned
by get_calib_data_observ_points coincide with the coordinates of the detected points computed by
find_calib_object.
Note that get_calib_data_observ_points returns the pose of an uncalibrated model. To get the pose of
a calibrated model, use get_calib_data.
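For example, the stored image points of one observation could be read back as follows (a minimal sketch; the
indices and variable names are illustrative):

* Read back the observation of pose 1 of calibration object 0,
* as seen by camera 0.
get_calib_data_observ_points (CalibDataID, 0, 0, 1, Rows, Columns, Indices, \
                              EstimatedPose)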
Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. CameraIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observing camera.
Default: 0
. CalibObjIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observed calibration object.
Default: 0
. CalibObjPoseIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observed calibration object pose.
Default: 0


. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinates of the detected points.
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinates of the detected points.
. Index (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer / real
Correspondence of the detected points to the points of the observed calibration object.
. Pose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Roughly estimated pose of the observed calibration object relative to the observing camera.
Number of elements: 7
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Module
Calibration

get_camera_setup_param ( : : CameraSetupModelID, CameraIdx,
    GenParamName : GenParamValue )

Get generic camera setup model parameters.


The operator get_camera_setup_param can be used to inspect diverse generic parameters of the camera
setup model CameraSetupModelID. Two types of parameters can be queried with this operator:
General parameters:
By setting CameraIdx to ’general’ and GenParamName to one of the following values, general camera setup
parameters are returned in GenParamValue:

’num_cameras’: Number of cameras described in the model. The number of cameras is fixed with the creation of
the camera setup model and cannot be changed after that (see create_camera_setup_model).
’camera_calib_error’: The root mean square error (RMSE) of the back projection of the optimization of the
camera system. This error is identical with the error returned by calibrate_cameras.
’reference_camera’: Returns the index of the camera that has been defined as reference camera within the sys-
tem. If no reference camera has been specified using set_camera_setup_param, the index 0 is re-
turned. If the coordinate system has been moved by setting a pose with the parameter ’coord_transf_pose’
in set_camera_setup_param, the origin of the coordinate system is not located in any of the available
cameras. Therefore, the index -1 is returned.
’coord_transf_pose’: Returns the pose in which the coordinate system of the setup has been moved. Please
note that after setting a reference camera with set_camera_setup_param, the pose of this camera
is returned. Adjusting this coordinate system subsequently using the parameter ’coord_transf_pose’ in
set_camera_setup_param yields a pose that corresponds to the location and orientation of the desired
coordinate system relative to the current one.

Camera parameters:
By setting CameraIdx to a valid setup camera index (a value between 0 and NumCameras-1) and
GenParamName to one of the following values, camera-specific parameters are returned in GenParamValue:

’type’: Camera type (see set_camera_setup_cam_param).


’params’: A tuple with internal camera parameters. The length of the tuple depends on the camera type.
’params_deviations’: A tuple representing the standard deviations of the internal camera parameters. The length
of the tuple depends on the camera type.
’params_covariances’: A tuple representing the covariance matrix of the internal camera parameters. The length
of the tuple depends on the camera type.


’pose’: Camera pose relative to the setup’s coordinate system (see create_camera_setup_model for more
details).

Note that the camera needs to be set first by set_camera_setup_cam_param, before any of its parameters
can be inspected by get_camera_setup_param. If CameraIdx is an index of an undefined camera, the
operator returns an error.
For more information about the calibration process of your camera setup see the chapter Calibration.
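For example, general and camera-specific parameters could be queried as follows (a minimal sketch; the variable
names are illustrative):

* Query the number of cameras in the setup.
get_camera_setup_param (CameraSetupModelID, 'general', 'num_cameras', \
                        NumCameras)
* Query the internal parameters and the pose of camera 0.
get_camera_setup_param (CameraSetupModelID, 0, 'params', CamParams0)
get_camera_setup_param (CameraSetupModelID, 0, 'pose', CamPose0)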
Parameters
. CameraSetupModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . camera_setup_model ; handle
Handle to the camera setup model.
. CameraIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Index of the camera in the setup.
Default: 0
Suggested values: CameraIdx ∈ {0, 1, 2, ’general’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Names of the generic parameters to be queried.
List of values: GenParamName ∈ {’camera_calib_error’, ’type’, ’params’, ’params_deviations’,
’params_covariances’, ’pose’, ’reference_camera’, ’coord_transf_pose’, ’num_cameras’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; real / integer / string
Values of the generic parameters to be queried.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Module
Calibration

query_calib_data_observ_indices ( : : CalibDataID, ItemType,
    ItemIdx : Index1, Index2 )

Query information about the relations between cameras, calibration objects, and calibration object poses.
A calibration data model (CalibDataID) contains a collection of observations, which are added to the model
by set_calib_data_observ_points. Each observation is associated to an observing camera, an observed
calibration object, and a calibration object pose. With the operator query_calib_data_observ_indices,
you can query observation indices associated with a camera or a calibration object, depending on the parameter
ItemType.
For ItemType=’camera’, you must pass a valid camera index in ItemIdx. Then, Index1 returns a list of
calibration object indices and Index2 returns a list of pose indices. Each pair [Index1[I],Index2[I]]
represents a calibration object pose that is ’observed’ by camera ItemIdx.
For ItemType=’calib_obj’, you must specify a valid calibration object index in ItemIdx. Then, Index1
returns a list of camera indices and Index2 returns a list of corresponding calibration object pose indices. Each
pair [Index1[I],Index2[I]] denotes that camera Index1[I] is observing the Index2[I]th pose of
calibration object ItemIdx.
This operator is particularly suitable for accessing observation data of a calibration data model whose configuration
is unknown at the moment of its usage (e.g., if it was just read from a file). As a special case, this operator can be
used to get the precise list of poses of one calibration object (see the example).
Parameters

. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.


. ItemType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Kind of referred object.
Default: ’camera’
List of values: ItemType ∈ {’camera’, ’calib_obj’}
. ItemIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Camera index or calibration object index (depending on the selected ItemType).
Default: 0
Suggested values: ItemIdx ∈ {0, 1, 2}
. Index1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer
List of calibration object indices or list of camera indices (depending on ItemType).
. Index2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer
List of calibration object pose indices.
Example

* Read a calibration model from a file.
read_calib_data ('calib_data.ccd',CalibDataID)
* Get the calibration object pose indices assigned to calibration object 0.
query_calib_data_observ_indices (CalibDataID, 'calib_obj', 0, _, \
CalibObjPoseIndices)
* CalibObjPoseIndices contains the list of pose indices of calibration
* object 0. In order to be stored in the model, each calibration object
* needs to be observed by at least one camera in the setup (a calibration
* object pose that is not observed by any camera cannot be stored in
* the model). Typically, a calibration object pose can be observed by more
* than one camera. Hence, some calibration object pose indices might appear
* repeatedly in CalibObjPoseIndices. We use tuple_sort and tuple_uniq to
* extract a unique list of calibration object pose indices for calibration
* object 0.
tuple_sort (CalibObjPoseIndices, CalibObjPoseIndices)
tuple_uniq (CalibObjPoseIndices, CalibObjPoseIndices)

* Get poses of calibration objects observed by camera 2.
calibrate_cameras (CalibDataID, Error)
query_calib_data_observ_indices (CalibDataID, 'camera', 2, CalibObjIndices,\
CalibObjPoseIndices)
for I := 0 to |CalibObjIndices|-1 by 1
get_calib_data (CalibDataID, 'calib_obj_pose', \
[CalibObjIndices[I], CalibObjPoseIndices[I]], \
'pose', CalibObjPose)
endfor

Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• CalibDataID

During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Module
Calibration


read_calib_data ( : : FileName : CalibDataID )

Restore a calibration data model from a file.


The operator read_calib_data restores a calibration data model from a file specified by its FileName
and returns a handle to the restored model in CalibDataID. The model file must have been created by
write_calib_data.
Note that write_calib_data does not store any calibration results into the file. Yet, calibrate_cameras
can be called for a fully configured calibration model immediately after the reading. All calibration results are
accessible afterwards.
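For example, a previously stored model could be restored and calibrated immediately (a minimal sketch; the file
name is illustrative):

* Restore a fully configured calibration data model and calibrate it.
read_calib_data ('multi_view_setup.ccd', CalibDataID)
calibrate_cameras (CalibDataID, Error)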
Parameters

. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
The path and file name of the model file.
File extension: .ccd
. CalibDataID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Module
Calibration

read_camera_setup_model ( : : FileName : CameraSetupModelID )

Restore a camera setup model from a file.


The operator read_camera_setup_model restores a camera setup model from a file specified by its
FileName and returns a handle to the restored model in CameraSetupModelID. The model file must have
been created by write_camera_setup_model.
Parameters
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
The path and file name of the model file.
File extension: .csm
. CameraSetupModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . camera_setup_model ; handle
Handle to the camera setup model.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Module
Calibration


remove_calib_data ( : : CalibDataID, ItemType, ItemIdx : )

Remove a data set from a calibration data model.


The operator remove_calib_data removes data from the calibration data model CalibDataID. Currently,
only the hand-eye calibration data set can be altered. With ItemType=’tool’, you can remove the pose of the
robot tool (in robot base coordinates), which was used to obtain the observation of the pose of the calibration
object with the same index ItemIdx (corresponds to the parameter CalibObjPoseIdx of any of the operators
find_calib_object, set_calib_data_observ_points, or set_calib_data_observ_pose).
Note that the corresponding observation of the calibration object with the same index ItemIdx that was pre-
viously set in the model also has to be removed. Otherwise, the operator calibrate_hand_eye will report an
error.
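For example, a tool pose and the corresponding observation could be removed together (a minimal sketch; the
indices are illustrative):

* Remove the tool pose with index 5 together with the corresponding
* observation of calibration object 0 seen by camera 0.
remove_calib_data (CalibDataID, 'tool', 5)
remove_calib_data_observ (CalibDataID, 0, 0, 5)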
Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. ItemType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the calibration data item.
Default: ’tool’
List of values: ItemType ∈ {’tool’}
. ItemIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; integer / string
Index of the affected item.
Default: 0
Suggested values: ItemIdx ∈ {0, 1, 2}
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• CalibDataID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
set_calib_data, remove_calib_data_observ
Possible Successors
calibrate_hand_eye
See also
calibrate_cameras
Module
Calibration

remove_calib_data_observ ( : : CalibDataID, CameraIdx,
    CalibObjIdx, CalibObjPoseIdx : )

Remove observation data from a calibration data model.


The operator remove_calib_data_observ removes observations that were set in a calibration
data model CalibDataID using find_calib_object, set_calib_data_observ_points,
or set_calib_data_observ_pose. The parameters CameraIdx, CalibObjIdx, and
CalibObjPoseIdx should specify a valid observation from the calibration model. Note that if the cali-
bration data model CalibDataID is used in calibrate_hand_eye, the corresponding tool pose also has to
be deleted using remove_calib_data.


Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. CameraIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observing camera.
Default: 0
. CalibObjIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observed calibration object.
Default: 0
. CalibObjPoseIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observed calibration object pose.
Default: 0
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• CalibDataID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
find_calib_object, set_calib_data_observ_points, set_calib_data_observ_pose
Possible Successors
remove_calib_data, calibrate_cameras, calibrate_hand_eye
Module
Calibration

serialize_calib_data ( : : CalibDataID : SerializedItemHandle )

Serialize a calibration data model.


serialize_calib_data serializes the data of a calibration data model (see fwrite_serialized_item
for an introduction of the basic principle of serialization). The same data that is written in a file by
write_calib_data is converted to a serialized item. The calibration data model is defined by the handle
CalibDataID. The serialized calibration data model is returned by the handle SerializedItemHandle
and can be deserialized by deserialize_calib_data.
Note that no calibration results are serialized. You can access them with the operator get_calib_data, either
as individual items or in the form of a camera setup model, and store them separately.
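For example, a calibration data model could be serialized and written to a binary file as follows (a minimal
sketch; the file name is illustrative):

* Serialize the calibration data model and write it to a file.
serialize_calib_data (CalibDataID, SerializedItemHandle)
open_file ('calib_data.ser', 'output_binary', FileHandle)
fwrite_serialized_item (FileHandle, SerializedItemHandle)
close_file (FileHandle)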
Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. SerializedItemHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
Result
If the parameters are valid, the operator serialize_calib_data returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).


• Processed without parallelization.
This operator modifies the state of the following input parameter:

• CalibDataID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Successors
fwrite_serialized_item, send_serialized_item, deserialize_calib_data
Module
Calibration

serialize_camera_setup_model (
: : CameraSetupModelID : SerializedItemHandle )

Serialize a camera setup model.


serialize_camera_setup_model serializes the data of a camera setup model (see
fwrite_serialized_item for an introduction of the basic principle of serialization). The same data
that is written in a file by write_camera_setup_model is converted to a serialized item. The camera setup
model is defined by the handle CameraSetupModelID. The serialized camera setup model is returned by the
handle SerializedItemHandle and can be deserialized by deserialize_camera_setup_model.
Parameters
. CameraSetupModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . camera_setup_model ; handle
Handle to the camera setup model.
. SerializedItemHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
Result
If the parameters are valid, the operator serialize_camera_setup_model returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Successors
fwrite_serialized_item, send_serialized_item, deserialize_camera_setup_model
Module
Calibration

set_calib_data ( : : CalibDataID, ItemType, ItemIdx, DataName,
    DataValue : )

Set data in a calibration data model.


With the operator set_calib_data, you can set data in the calibration data model CalibDataID.
Note that an overview on how the calibration data model is filled with data during the processes of camera calibra-
tion and hand-eye calibration is provided by the description of get_calib_data.
The calibration data model can contain various kinds of data. How to set specific data in the calibration data model
is described for different categories of data:


• Model-related data (ItemType ’model’)


• Camera-related data (ItemType ’camera’)
• Data related to calibration object poses (ItemType ’calib_obj_pose’)
• Hand-eye calibration related data (ItemType ’tool’)

The parameter ItemIdx lets you select whether the new value should be set for all items of a type or only for an
individual one. The parameters to set are passed in DataName, their values in DataValue.
To get detailed information about the calibration process of your camera setup see the chapter Calibration.
Model-related data

ItemType=’model’: ItemIdx must be set to ’general’.


Depending on the selection in DataName, you can set the following model-related parameters to the value
passed in DataValue:
’reference_camera’: Set the reference camera for the calibration model to the passed camera index. All
poses stored in the calibration data model are specified in the coordinate system of the reference camera
(see get_calib_data).
’common_motion_vector’: For stereo setups with telecentric line scan cameras, a string with a Boolean value
(i.e., ’true’ or ’false’) that determines whether the cameras have a common motion vector.
’optimization_method’: Set the optimization method to be used in the hand-eye calibration pro-
cess. If DataValue=’linear’ is set, a linear method is used for the hand-eye calibration. If
DataValue=’nonlinear’ is set, a nonlinear method is used for the hand-eye calibration. If
DataValue=’stochastic’ is set, a method is used which also takes the uncertainty of measured ob-
servations into account (see calibrate_hand_eye for more details).

Camera-related data

ItemType=’camera’: ItemIdx determines whether data is set for all cameras in general or for a specific camera.
With ItemIdx=’general’, the new settings are applied to all cameras in the model. If you pass a valid
camera index instead, i.e., a number between 0 and NumCameras-1 (NumCameras is specified during
model creation with create_calib_data), only the specified camera is affected by the changes.
By selecting the following parameters in DataName, you can specify which camera parameters shall be
optimized during the calibration performed by calibrate_cameras:
’calib_settings’: The camera parameters listed in DataValue are marked for optimization for the affected
camera(s) (additionally to the camera parameters that were already marked for optimization). Note that
by default, all parameters are marked for the optimization. That is, ’calib_settings’ is mainly suited to
add previously excluded parameters again.
’excluded_settings’: The camera parameters listed in DataValue are excluded from the optimization for
the affected camera(s).
The following camera parameters can be passed in DataValue. See Calibration for affected camera types
and further details about the parameters.
Internal camera parameters
’focus’: Focal length of the lens.
’magnification’: Magnification of the lens.
’kappa’: Radial distortion coefficient kappa of the division model.
’k1’,’k2’,’k3’: Polynomial radial distortion parameters.
’poly_tan_2’: An alias parameter for all polynomial tangential distortion parameters, i.e., p1 and p2.
’poly’: An alias parameter for all polynomial distortion parameters, i.e., k1, k2, k3, p1, and p2.
’image_plane_dist’: The distance of the tilted image plane from the perspective projection center.
’tilt’: Tilt and rotation of the tilt lens.
’cx’,’cy’: Coordinates of the camera’s principal point.
’principal_point’: An alias parameter for ’cx’ and ’cy’.
’sx’,’sy’: Sensor element dimensions.
’params’: All internal camera parameters.
External camera parameters


’alpha’,’beta’,’gamma’: Rotation part of the camera pose.


’transx’,’transy’,’transz’: Translation part of the camera pose.
’pose’: All external camera parameters.
Further camera parameters
’vx’,’vy’,’vz’: Motion vector parameters. Note that for stereo setups with telecentric line scan cameras with a
common motion vector (i.e., ’common_motion_vector’ = ’true’), the motion vector optimization param-
eters of the reference camera determine which parameters of the common motion vector are optimized.
For this kind of setup, it is recommended not to exclude any motion vector parameters from the opti-
mization.
’all’: All camera parameters.
By default, all parameters are marked for calibration. As an exception, the camera pose is excluded from
the optimization for calibration setups with only one camera (NumCameras=1). This setting makes the
calibration process equivalent to the one performed by camera_calibration.

Data related to calibration object poses

ItemType=’calib_obj_pose’: ItemIdx determines whether data is set for all calibration object poses in general or
for a specific calibration object pose. With ItemIdx=’general’ the new settings are applied to all calibration
object poses in the model. If you pass a valid calibration object pose index instead, i.e., a tuple containing a
valid index pair [CalibObjIdx, CalibObjPoseIdx], you specify a calibration object pose, which is
affected by the changes.
By selecting the following parameters in DataName, you can specify which calibration object pose pa-
rameters shall be optimized during the calibration performed by calibrate_cameras:
’calib_settings’: The calibration object pose settings listed in DataValue are marked for optimization for
the affected pose(s). Note that by default, all calibration pose parameters are marked for the optimization.
That is, ’calib_settings’ is mainly suited to add previously excluded parameters again.
’excluded_settings’: The calibration object pose settings listed in DataValue are excluded from the opti-
mization for the affected pose(s).
The following calibration pose parameters can be passed in DataValue:
’alpha’,’beta’,’gamma’: Rotation part of the calibration object pose.
’transx’,’transy’,’transz’: Translation part of the calibration object pose.
’pose’: All calibration object pose parameters.
’all’: All calibration object pose optimization parameters, i.e., the same as ’pose’.
By default, all parameters are marked for calibration.
The current settings for any model item can be queried with the operator get_calib_data.

Hand-eye calibration related data

ItemType=’tool’: ItemIdx must be set to a valid calibration object pose index.


By selecting the following parameter in DataName, you can set the pose of the robot tool:
’tool_in_base_pose’: Set the pose of the robot tool (in robot base coordinates), which was used to ob-
tain the observation of the pose of the calibration object with the same index ItemIdx (corre-
sponds to the parameter CalibObjPoseIdx of any of the operators find_calib_object,
set_calib_data_observ_pose, or set_calib_data_observ_points).
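For example, the robot tool pose belonging to an observation could be stored as follows (a minimal sketch;
ToolInBasePose is assumed to be a 7-element HALCON pose obtained from the robot controller, and the index
must match the corresponding calibration object pose index):

* Store the robot tool pose that belongs to calibration object pose 0.
set_calib_data (CalibDataID, 'tool', 0, 'tool_in_base_pose', ToolInBasePose)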

Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. ItemType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of calibration data item.
Default: ’model’
List of values: ItemType ∈ {’model’, ’camera’, ’calib_obj_pose’, ’tool’}
. ItemIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; integer / string
Index of the affected item (depending on the selected ItemType).
Default: ’general’
Suggested values: ItemIdx ∈ {0, 1, 2, ’general’}


. DataName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Parameter(s) to set.
Default: ’reference_camera’
List of values: DataName ∈ {’reference_camera’, ’calib_settings’, ’excluded_settings’,
’common_motion_vector’, ’optimization_method’, ’tool_in_base_pose’}
. DataValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer
New value(s).
Default: 0
Suggested values: DataValue ∈ {0, 1, 2, ’all’, ’pose’, ’params’, ’alpha’, ’beta’, ’gamma’, ’transx’, ’transy’,
’transz’, ’focus’, ’magnification’, ’kappa’, ’poly’, ’poly_tan_2’, ’k1’, ’k2’, ’k3’, ’image_plane_dist’, ’tilt’,
’principal_point’, ’cx’, ’cy’, ’sx’, ’sy’, ’vx’, ’vy’, ’vz’, ’true’, ’false’, ’linear’, ’nonlinear’, ’stochastic’}
Example

* Here, the cell size is known exactly, thus it is excluded from
* the optimization.
set_calib_data (CalibDataID, 'camera', 'general', 'excluded_settings', \
['sx','sy'])

Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• CalibDataID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
set_calib_data_observ_points, find_calib_object
Possible Successors
calibrate_cameras, calibrate_hand_eye
Module
Calibration

set_calib_data_calib_object ( : : CalibDataID, CalibObjIdx,
    CalibObjDescr : )

Define a calibration object in a calibration model.


The operator set_calib_data_calib_object defines the calibration object with the index
CalibObjIdx in the camera calibration data model CalibDataID. The index must be between 0 and
NumCalibObjects-1 (NumCalibObjects is specified during model creation with create_calib_data
and can be queried with get_calib_data).
If a calibration object description with index CalibObjIdx is already defined, then the current object description
overwrites it (the description is ’substituted’). Note that all NumCalibObjects calibration objects must be set
to perform calibrate_cameras.
The parameter CalibObjDescr can be used in two ways:

as a file name: it specifies a calibration plate description file as created with create_caltab or
gen_caltab.
as a numerical tuple: it specifies the 3D coordinates of all points of the calibration object. All X, Y, and Z
coordinates, respectively, of all points must be packed sequentially in the tuple in form: [X, Y, Z], i.e.,
[X1, ..., Xn, Y1, ..., Yn, Z1, ..., Zn], where |X| = |Y| = |Z| and all coordinates
are in meters.


To query the calibration object parameters stored earlier in a calibration data model, use get_calib_data.
To get detailed information about the calibration process of your camera setup see the chapter Calibration.
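For example, a standard HALCON calibration plate description could be assigned to calibration object 0 (a
minimal sketch; the description file must match the plate that is actually used):

* Use a calibration plate description file for calibration object 0.
set_calib_data_calib_object (CalibDataID, 0, 'calplate_40mm.cpd')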
Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. CalibObjIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Calibration object index.
Default: 0
Suggested values: CalibObjIdx ∈ {0, 1, 2}
. CalibObjDescr (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer / string
3D point coordinates or a description file name.
List of values: CalibObjDescr ∈ {’calplate.cpd’, ’calplate_5mm.cpd’, ’calplate_10mm.cpd’,
’calplate_20mm.cpd’, ’calplate_40mm.cpd’, ’calplate_80mm.cpd’, ’calplate_160mm.cpd’,
’calplate_320mm.cpd’, ’calplate_640mm.cpd’, ’calplate_1200mm.cpd’, ’calplate_20mm_dark_on_light.cpd’,
’calplate_40mm_dark_on_light.cpd’, ’calplate_80mm_dark_on_light.cpd’, ’caltab.descr’,
’caltab_650um.descr’, ’caltab_2500um.descr’, ’caltab_6mm.descr’, ’caltab_10mm.descr’,
’caltab_30mm.descr’, ’caltab_100mm.descr’, ’caltab_200mm.descr’, ’caltab_800mm.descr’,
’caltab_small.descr’, ’caltab_big.descr’}
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• CalibDataID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_calib_data, set_calib_data_cam_param
Possible Successors
set_calib_data_cam_param, set_calib_data_observ_points, find_calib_object
Module
Calibration

set_calib_data_cam_param ( : : CalibDataID, CameraIdx,
    CameraType, CameraParam : )

Set type and initial parameters of a camera in a calibration data model.


The operator set_calib_data_cam_param sets the initial camera parameters CameraParam for
the camera with the index CameraIdx in the calibration data model CalibDataID. The parameter
CameraIdx must be between 0 and NumCameras-1 (NumCameras is specified during model creation with
create_calib_data and can be queried with get_calib_data). If a camera with CameraIdx was al-
ready defined, its parameters are overwritten by the current ones (the camera is substituted). In this case, the
selection of which camera parameters are marked for optimization is reset and may have to be set again. Note that
all NumCameras cameras must be set to perform calibrate_cameras. The calibration procedure refines
these initial parameters. You can find further information about the calibration process of different camera setups
in Calibration.
The parameter CameraType is only provided for backwards compatibility. The information about the camera
type is contained in the first element of CameraParam. Therefore, CameraType should be set either to its
default value [] (the recommended option) or to the same value as the first element of CameraParam. In any
other case an error is raised.


An overview of all available camera types and their respective parameters is given in CameraParam.
The camera type can be queried later by calling get_calib_data with the arguments ItemType=’camera’
and DataName=’type’. The initial camera parameters can be queried by calling get_calib_data with argu-
ments ItemType=’camera’ and DataName=’init_params’.
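For example, initial parameters for an area-scan camera with the division distortion model could be set as
follows (a minimal sketch; the numeric start values are placeholders and must be adapted to the actual camera
and lens):

* Rough start values: 16 mm lens, no distortion, 7.4 um cells,
* principal point in the image center, 1280x960 image.
StartCamParam := ['area_scan_division', 0.016, 0, 7.4e-6, 7.4e-6, \
                  640, 480, 1280, 960]
set_calib_data_cam_param (CalibDataID, 0, [], StartCamParam)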
Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. CameraIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer / string
Camera index.
Default: 0
Suggested values: CameraIdx ∈ {’all’, 0, 1, 2}
. CameraType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Type of the camera.
Default: []
List of values: CameraType ∈ {[]}
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Initial camera internal parameters.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

This operator modifies the state of the following input parameter:


• CalibDataID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_calib_data, set_calib_data_calib_object
Possible Successors
set_calib_data_calib_object, set_calib_data_observ_points, find_calib_object
Module
Calibration

set_calib_data_observ_points ( : : CalibDataID, CameraIdx,
    CalibObjIdx, CalibObjPoseIdx, Row, Column, Index, Pose : )

Set point-based observation data in a calibration data model.


For a calibration model of type CalibSetup=’calibration_object’ (see create_calib_data), cameras are
calibrated based on so-called observations of calibration objects. With set_calib_data_observ_points,
you store such an observation in the calibration data model CalibDataID. An observation consists of the fol-
lowing data:

CameraIdx: index of the observing camera


CalibObjIdx: index of the observed calibration object
CalibObjPoseIdx: index of the observed pose of the calibration object. You can choose it freely, without
following a strict order. If you specify an index that already exists for the calibration object CalibObjIdx,
the corresponding observation data is replaced by the new one. Of course, the same index can be assigned to
poses of different calibration objects.


Row, Column, Index: Extracted image coordinates and corresponding index of the calibration marks of the
calibration object. Row and Column are tuples containing the same number of elements. Index can either
contain a tuple (of the same length) or the value ’all’, indicating that the points [Row, Column] correspond
in a one-to-one relation to the calibration marks of the calibration object. If the number of row or column
coordinates does not match the number of calibration marks, a corresponding error message is returned.
Pose: A roughly estimated pose of the observed calibration object relative to the observing camera.

If you are using the HALCON calibration plate, it is recommended to use find_calib_object instead of
set_calib_data_observ_points, since the contour information, which it stores in the calibration data
model, enables a more precise calibration procedure with calibrate_cameras.
The observation data can be accessed later by calling get_calib_data_observ_points using the same
values for the arguments CameraIdx, CalibObjIdx, and CalibObjPoseIdx.
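For example, an observation could be stored as follows (a minimal sketch; Rows, Columns, and StartPose are
assumed to come from a preceding mark extraction, e.g., with find_marks_and_pose):

* Store the extracted mark coordinates and the roughly estimated pose
* as pose 0 of calibration object 0, seen by camera 0.
set_calib_data_observ_points (CalibDataID, 0, 0, 0, Rows, Columns, 'all', \
                              StartPose)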
Parameters

. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. CameraIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observing camera.
Default: 0
Suggested values: CameraIdx ∈ {0, 1, 2}
. CalibObjIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the calibration object.
Default: 0
Suggested values: CalibObjIdx ∈ {0, 1, 2}
. CalibObjPoseIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observed calibration object pose.
Default: 0
Suggested values: CalibObjPoseIdx ∈ {0, 1, 2}
Restriction: CalibObjPoseIdx >= 0
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinates of the extracted points.
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinates of the extracted points.
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer / string
Correspondence of the extracted points to the calibration marks of the observed calibration object.
Default: ’all’
Suggested values: Index ∈ {’all’, 0, 1, 2}
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Roughly estimated pose of the observed calibration object relative to the observing camera.
Number of elements: 7
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• CalibDataID

During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
find_marks_and_pose, set_calib_data_cam_param, set_calib_data_calib_object
Possible Successors
set_calib_data, calibrate_cameras


Alternatives
find_calib_object
Module
Calibration

set_camera_setup_cam_param ( : : CameraSetupModelID, CameraIdx,
    CameraType, CameraParam, CameraPose : )

Define type, parameters, and relative pose of a camera in a camera setup model.
The operator set_camera_setup_cam_param defines the internal parameters and the pose of the camera
with CameraIdx in the camera setup model CameraSetupModelID. The parameter CameraIdx must be
between 0 and NumCameras-1 (see get_camera_setup_param with argument ’num_cameras’). If a cam-
era with CameraIdx was already defined, its parameters are overwritten by the current ones (the camera is
’substituted’).
The number of values in CameraParam depends on the camera type. See the description of
set_calib_data_cam_param for a list of values and Calibration for details on camera types and camera
parameters.
The parameter CameraType is only provided for backwards compatibility. The information about the camera
type is contained in the first element of CameraParam. Therefore, CameraType should be set either to its
default value [] (the recommended option) or to the same value as the first element of CameraParam. In any
other case an error is raised.
The parameter CameraPose specifies the pose of the camera relative to the setup’s coordinate system (see
set_camera_setup_param for further explanations on the setup’s coordinate system).
All of the parameters set by set_camera_setup_cam_param can be read back by
get_camera_setup_param. While the camera type can be changed only with a new
call to set_camera_setup_cam_param, all other camera parameters can be modified by
set_camera_setup_param. Furthermore, set_camera_setup_param can set additional data to
a camera: standard deviations or covariances of the internal camera parameters.
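For example, a camera of the setup could be defined as follows (a minimal sketch; the parameter values and the
pose are placeholders):

* Define camera 1 with division-model parameters and a pose 0.2 m to
* the right of the setup's coordinate system origin.
CamParam1 := ['area_scan_division', 0.016, 0, 7.4e-6, 7.4e-6, \
              640, 480, 1280, 960]
create_pose (0.2, 0, 0, 0, 0, 0, 'Rp+T', 'gba', 'point', CamPose1)
set_camera_setup_cam_param (CameraSetupModelID, 1, [], CamParam1, CamPose1)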
Parameters
. CameraSetupModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . camera_setup_model ; handle
Handle to the camera setup model.
. CameraIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer
Index of the camera in the setup.
Suggested values: CameraIdx ∈ {0, 1, 2}
. CameraType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Type of the camera.
Default: []
List of values: CameraType ∈ {[]}
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. CameraPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Pose of the camera relative to the setup’s coordinate system.
Number of elements: 7
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Module
Calibration


set_camera_setup_param ( : : CameraSetupModelID, CameraIdx,
    GenParamName, GenParamValue : )

Set generic camera setup model parameters.


The operator set_camera_setup_param can be used to set diverse generic parameters or transformations for
the camera setup model CameraSetupModelID. Two types of parameters can be set with this operator:
Coordinate system of the setup and transformation of camera poses:
By setting CameraIdx to ’general’ and GenParamName to one of the following values, you can perform the
following general pose transformation for all cameras:

’reference_camera’: When setting GenParamValue to a valid camera index, all camera poses are recomputed
relative to the coordinate system of this camera.
’coord_transf_pose’: When passing a tuple in HALCON pose format in GenParamValue, the current coordi-
nate system is moved into this pose. The pose in GenParamValue represents the location and orientation
of the desired coordinate system relative to the current one. All camera poses are recomputed relative to the
new coordinate system.

The recomputed camera poses can be inspected with the operator get_camera_setup_param.
Camera parameters:
By setting CameraIdx to a valid setup camera index (a value between 0 and NumCameras-1) and
GenParamName to one of the following values, camera specific parameters can be set with GenParamValue:

’params’: A tuple with internal camera parameters.


’params_deviations’: A tuple with the standard deviations of the internal camera parameters except for
CameraType, Width, and Height, thus |params_deviations|=|params|-3. The internal cam-
era parameters are camera-type dependent. See the description of set_calib_data_cam_param for a
list of values and calibrate_cameras for details on camera types and camera parameters.
’params_covariances’: A tuple with the covariance matrix of the internal camera parameters. The tuple must rep-
resent a square matrix whose dimensions are both identical to the number of standard deviation values, thus
|params_covariances| = |params_deviations|² = (|params|-3)², see ’params_deviations’.
’pose’: A tuple representing the pose of the camera in HALCON pose format, relative to camera setup’s coordinate
system. See the above section for further details.

Note that the camera must already be defined in the model, before any of its parameters can be changed by
set_camera_setup_param. If CameraIdx is an index of an undefined camera, the operator returns an error.
All parameters can be read back by get_camera_setup_param.
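For example, the setup's coordinate system could be changed as follows (a minimal sketch; the pose values are
placeholders):

* Make camera 1 the reference camera, then shift the coordinate system
* by 0.1 m along the x-axis of camera 1.
set_camera_setup_param (CameraSetupModelID, 'general', 'reference_camera', 1)
create_pose (0.1, 0, 0, 0, 0, 0, 'Rp+T', 'gba', 'point', TransfPose)
set_camera_setup_param (CameraSetupModelID, 'general', 'coord_transf_pose', \
                        TransfPose)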
Parameters
. CameraSetupModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . camera_setup_model ; handle
Handle to the camera setup model.
. CameraIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Unique index of the camera in the setup.
Default: 0
Suggested values: CameraIdx ∈ {0, 1, 2, ’general’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Names of the generic parameters to be set.
List of values: GenParamName ∈ {’params’, ’params_deviations’, ’params_covariances’, ’pose’,
’reference_camera’, ’coord_transf_pose’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; real / integer / string
Values of the generic parameters to be set.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.


Possible Predecessors
create_camera_setup_model, read_camera_setup_model
Module
Calibration

write_calib_data ( : : CalibDataID, FileName : )

Store a calibration data model into a file.


The operator write_calib_data stores a calibration data model CalibDataID into a file specified by its
file name FileName. The information stored in the file includes:

• initial camera parameters


• calibration object descriptions
• observation data
• model settings: generic and specific optimization parameters for both cameras and calibration object poses.

Note that no calibration results are stored in the file. You can access them with the operator get_calib_data,
either as individual items or in the form of a camera setup model, and store them separately.
The calibration data model can later be read with read_calib_data.
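For illustration, a minimal sketch (the file name is a placeholder):

* Save the calibration data model and restore it later.
write_calib_data (CalibDataID, 'my_calibration.ccd')
read_calib_data ('my_calibration.ccd', CalibDataIDRestored)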
Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
The file name of the model to be saved.
File extension: .ccd
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

This operator modifies the state of the following input parameter:


• CalibDataID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Module
Calibration

write_camera_setup_model ( : : CameraSetupModelID, FileName : )

Store a camera setup model into a file.


The operator write_camera_setup_model stores a camera setup model CameraSetupModelID into a
file specified by its file name FileName.
The camera setup model can later be read with read_camera_setup_model.
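For illustration, a minimal sketch (the file name is a placeholder):

* Save the camera setup model and restore it later.
write_camera_setup_model (CameraSetupModelID, 'my_setup.csm')
read_camera_setup_model ('my_setup.csm', CameraSetupModelIDRestored)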


Parameters
. CameraSetupModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . camera_setup_model ; handle
Handle to the camera setup model.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
The file name of the model to be saved.
File extension: .csm
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Module
Calibration

6.8 Projection

cam_par_pose_to_hom_mat3d ( : : CameraParam, Pose : HomMat3D )

Convert internal camera parameters and a 3D pose into a 3×4 projection matrix.
cam_par_pose_to_hom_mat3d converts the internal camera parameters CameraParam and the 3D pose
Pose, which represent the external camera parameters, into the 3×4 projection matrix HomMat3D, which can
be used to project points from 3D to 2D. The conversion can only be performed if the distortion coefficients in
CameraParam are 0. If necessary, change_radial_distortion_cam_par must be used to achieve this.
The internal camera parameters and the pose are typically obtained with calibrate_cameras.
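For illustration, a minimal sketch with distortion-free example parameters; the numeric values are placeholders, not calibration results:

* Distortion-free area scan camera (kappa = 0) and an example pose.
gen_cam_par_area_scan_division (0.016, 0.0, 8.3e-6, 8.3e-6, \
                                320, 240, 640, 480, CameraParam)
create_pose (0.1, 0.2, 1.5, 10, 20, 30, 'Rp+T', 'gba', 'point', Pose)
* Convert to a 3x4 projection matrix and project a 3D point.
cam_par_pose_to_hom_mat3d (CameraParam, Pose, HomMat3D)
project_point_hom_mat3d (HomMat3D, 0.05, 0.05, 0.0, Qx, Qy)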
Parameters
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
3D pose.
Number of elements: 7
. HomMat3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d ; real
3×4 projection matrix.
Result
cam_par_pose_to_hom_mat3d returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary,
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
calibrate_cameras, change_radial_distortion_cam_par
Possible Successors
project_point_hom_mat3d, project_hom_point_hom_mat3d
See also
create_pose, hom_mat3d_to_pose, project_3d_point, get_line_of_sight
Module
Foundation


project_3d_point ( : : X, Y, Z, CameraParam : Row, Column )

Project 3D points into (sub-)pixel image coordinates.


project_3d_point projects one or more 3D points (with coordinates X, Y, and Z) into the image plane (in
pixels) and returns the result in Row and Column. The coordinates X, Y, and Z are given in the camera coordinate
system, i.e., they describe the position of the points relative to the camera.
The internal camera parameters CameraParam describe the projection characteristics of the camera (see Calibra-
tion for details).
Parameters
. X (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.x-array ; real
X coordinates of the 3D points to be projected in the camera coordinate system.
. Y (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.y-array ; real
Y coordinates of the 3D points to be projected in the camera coordinate system.
. Z (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.z-array ; real
Z coordinates of the 3D points to be projected in the camera coordinate system.
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. Row (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinates of the projected points (in pixels).
. Column (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinates of the projected points (in pixels).
Example

* Set internal camera parameters and pose of the world coordinate
* system in camera coordinates.
* Note that, typically, these values are the result of a prior
* calibration.
gen_cam_par_area_scan_division (0.01, -731, 5.2e-006, 5.2e-006, \
654, 519, 1280, 1024, CameraParam)
create_pose (0.1, 0.2, 0.3, 40, 50, 60, \
'Rp+T', 'gba', 'point', WorldPose)
* Convert pose into transformation matrix.
pose_to_hom_mat3d(WorldPose, HomMat3D)
* Transform 3D points from world into the camera coordinate system.
affine_trans_point_3d(HomMat3D, [3.0, 3.2], [4.5, 4.5], [3.8, 4.2], X, Y, Z)
* Project 3D points into image.
project_3d_point(X, Y, Z, CameraParam, Row, Column)

Result
project_3d_point returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception
is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
Possible Predecessors
read_cam_par, affine_trans_point_3d
Possible Successors
gen_region_points, gen_region_polygon, disp_polygon
See also
camera_calibration, disp_caltab, read_cam_par, get_line_of_sight,
affine_trans_point_3d, image_points_to_world_plane


Module
Calibration

project_hom_point_hom_mat3d ( : : HomMat3D, Px, Py, Pz, Pw : Qx, Qy, Qw )

Project a homogeneous 3D point using a 3×4 projection matrix.


project_hom_point_hom_mat3d applies the 3×4 projection matrix HomMat3D to all homogeneous in-
put points (Px,Py,Pz,Pw) and returns an array of homogeneous output points (Qx,Qy,Qw). The transformation
is described by the homogeneous transformation matrix given in HomMat3D. This corresponds to the following
equation (input and output points as homogeneous vectors):

\begin{pmatrix} Q_x \\ Q_y \\ Q_w \end{pmatrix} = \mathrm{HomMat3D} \cdot \begin{pmatrix} P_x \\ P_y \\ P_z \\ P_w \end{pmatrix}

To transform the homogeneous coordinates to Euclidean coordinates, they must be divided by Qw:

\begin{pmatrix} E_x \\ E_y \end{pmatrix} = \begin{pmatrix} Q_x / Q_w \\ Q_y / Q_w \end{pmatrix}

This can be achieved directly by calling project_point_hom_mat3d. Thus, project_hom_point_hom_mat3d is primarily useful for transforming points or point sets for which the resulting points might lie on the line at infinity, i.e., points that potentially have Qw = 0, for which the above division cannot be performed.
Note that, consistent with the conventions used by the projection in calibrate_cameras, Qx corresponds to
the column coordinate of an image and Qy corresponds to the row coordinate.
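For illustration, a minimal sketch; CameraParam and Pose are assumed to come from a prior calibration, and the second input point is a direction (Pw = 0) that cannot be represented as a finite 3D point:

* Project a finite point and a direction (point at infinity).
cam_par_pose_to_hom_mat3d (CameraParam, Pose, HomMat3D)
project_hom_point_hom_mat3d (HomMat3D, [0.05, 1.0], [0.05, 0.0], \
                             [0.0, 0.0], [1.0, 0.0], Qx, Qy, Qw)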
Parameters
. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d ; real
3×4 projection matrix.
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Input point (x coordinate).
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Input point (y coordinate).
. Pz (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Input point (z coordinate).
. Pw (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Input point (w coordinate).
. Qx (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Output point (x coordinate).
. Qy (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Output point (y coordinate).
. Qw (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Output point (w coordinate).
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.


Possible Predecessors
cam_par_pose_to_hom_mat3d
Alternatives
project_point_hom_mat3d, project_3d_point
Module
Foundation

project_point_hom_mat3d ( : : HomMat3D, Px, Py, Pz : Qx, Qy )

Project a 3D point using a 3×4 projection matrix.


project_point_hom_mat3d applies the 3×4 projection matrix HomMat3D to all input points (Px,Py,Pz)
and returns an array of output points (Qx,Qy). The transformation is described by the 3×4 projection matrix given
in HomMat3D. This corresponds to the following equations (input and output points as homogeneous vectors):

\begin{pmatrix} T_x \\ T_y \\ T_w \end{pmatrix} = \mathrm{HomMat3D} \cdot \begin{pmatrix} P_x \\ P_y \\ P_z \\ 1 \end{pmatrix}

project_point_hom_mat3d then transforms the homogeneous coordinates to Euclidean coordinates by dividing them by T_w:

\begin{pmatrix} Q_x \\ Q_y \end{pmatrix} = \begin{pmatrix} T_x / T_w \\ T_y / T_w \end{pmatrix}

If a point on the line at infinity (T w = 0) is created by the transformation, an error is returned. If this is undesired,
project_hom_point_hom_mat3d can be used.
Note that, consistent with the conventions used by the projection in calibrate_cameras, Qx corresponds to
the column coordinate of an image and Qy corresponds to the row coordinate.
Parameters
. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d ; real
3×4 projection matrix.
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Input point (x coordinate).
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Input point (y coordinate).
. Pz (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Input point (z coordinate).
. Qx (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Output point (x coordinate).
. Qy (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Output point (y coordinate).
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
Possible Predecessors
cam_par_pose_to_hom_mat3d
Alternatives
project_hom_point_hom_mat3d, project_3d_point
Module
Foundation


6.9 Rectification

change_radial_distortion_cam_par ( : : Mode, CamParamIn, DistortionCoeffs : CamParamOut )

Determine new camera parameters in accordance to the specified radial distortion.


change_radial_distortion_cam_par modifies the internal camera parameters in accordance to the spec-
ified radial distortion coefficients DistortionCoeffs. The operator can only be used for area scan cameras
(with any lens type) and for line scan cameras with telecentric lenses. Line scan cameras with perspective lenses
are not supported.
With the parameter Mode, one of the following modes can be selected:

’fixed’: Only the distortion coefficients are modified, the other internal camera parameters remain unchanged. In
general, this leads to a change of the visible part of the scene.
’fullsize’: For area scan cameras, the scale factors Sx and Sy and the image center point (Cx , Cy )T are modified in
order to preserve the visible part of the scene. For line scan cameras with telecentric lenses, the scale factor
Sx , the image center point (Cx , Cy )T , and the Vy component of the motion vector are changed to achieve
this effect. Thus, all points visible in the original image are also visible in the modified (rectified) image. In
general, this leads to undefined pixels in the modified image.
’adaptive’: A trade-off between the other modes: The visible part of the scene is slightly reduced to prevent
undefined pixels in the modified image. The same parameters as for ’fullsize’ are modified.
’preserve_resolution’: As in the mode ’fullsize’, all points visible in the original image are also visible in the
modified (rectified) image. For area scan cameras, the scale factors Sx and Sy and the image center point
(Cx , Cy )T are modified. For line scan cameras with telecentric lenses, the scale factor Sx , the image center
point (Cx , Cy )T , and potentially the Vy component of the motion vector are changed to achieve this
effect. In general, this leads to undefined pixels in the modified image. In contrast to the mode ’fullsize’,
additionally the size of the modified image is increased such that the image resolution does not decrease in
any part of the image.

In all modes, the distortion coefficients in CamParamOut are set to DistortionCoeffs. For telecentric line
scan cameras, the motion vector also influences the perceived distortion. For example, a nonzero Vx component
leads to skewed pixels. Furthermore, if Vy ≠ Sx /Magnification, the pixels appear to be non-square. Therefore, for
telecentric line scan cameras, up to three more components can be passed in addition to κ or (K1 , K2 , K3 , P1 , P2 ),
respectively, in DistortionCoeffs. These specify the new Vx , Vy , and Vz components of the motion vector.
The transformation of a pixel in the modified image into the image plane using CamParamOut results in the same
point as the transformation of a pixel in the original image via CamParamIn.
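For illustration, a minimal sketch that computes the parameters of a corresponding distortion-free camera; CamParamOrig is assumed to come from a prior calibration:

* Remove the radial distortion (kappa = 0) while keeping the full
* field of view.
change_radial_distortion_cam_par ('fullsize', CamParamOrig, 0.0, \
                                  CamParamRect)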
Parameters
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Mode
Default: ’adaptive’
Suggested values: Mode ∈ {’fullsize’, ’adaptive’, ’fixed’, ’preserve_resolution’}
. CamParamIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters (original).
. DistortionCoeffs (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer
Desired radial distortions.
Number of elements: DistortionCoeffs == 1 || DistortionCoeffs == 5
Default: 0.0
. CamParamOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters (modified).
Result
change_radial_distortion_cam_par returns 2 (H_MSG_TRUE) if all parameter values are correct. If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).


• Processed without parallelization.
Possible Predecessors
camera_calibration, read_cam_par
Possible Successors
change_radial_distortion_image, change_radial_distortion_contours_xld,
gen_radial_distortion_map
See also
camera_calibration, read_cam_par, change_radial_distortion_image,
change_radial_distortion_contours_xld, change_radial_distortion_points
Module
Calibration

change_radial_distortion_contours_xld (
Contours : ContoursRectified : CamParamIn, CamParamOut : )

Change the radial distortion of contours.


change_radial_distortion_contours_xld changes the radial distortion of the input contours
Contours in accordance to the internal camera parameters CamParamIn and CamParamOut. Each subpixel
of an input contour is transformed into the image plane using CamParamIn and subsequently projected into a
subpixel of the corresponding contour in ContoursRectified using CamParamOut.
If CamParamOut was computed via change_radial_distortion_cam_par, the contours
ContoursRectified are equivalent to Contours obtained with a lens with a modified radial distor-
tion κ. If κ = 0 the contours are rectified. A subsequent pose estimation (determination of the external camera
parameters) is not affected by this operation.
Please note that change_radial_distortion_contours_xld does not work for line scan cameras with
perspective lenses.
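For illustration, a minimal sketch; the edge filter parameters are placeholders, Image is assumed to be loaded, and CamParamOrig is assumed to come from a prior calibration:

* Extract subpixel edges and remove the lens distortion from them.
edges_sub_pix (Image, Contours, 'canny', 1.5, 20, 40)
change_radial_distortion_cam_par ('adaptive', CamParamOrig, 0.0, \
                                  CamParamRect)
change_radial_distortion_contours_xld (Contours, ContoursRectified, \
                                       CamParamOrig, CamParamRect)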
Parameters
. Contours (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; object
Original contours.
. ContoursRectified (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; object
Resulting contours with modified radial distortion.
. CamParamIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameter for Contours.
. CamParamOut (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameter for ContoursRectified.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
change_radial_distortion_cam_par, gen_contours_skeleton_xld, edges_sub_pix,
smooth_contours_xld
Possible Successors
gen_polygons_xld, smooth_contours_xld
See also
change_radial_distortion_cam_par, camera_calibration, read_cam_par,
change_radial_distortion_image, change_radial_distortion_points
Module
Calibration


change_radial_distortion_image ( Image,
Region : ImageRectified : CamParamIn, CamParamOut : )

Change the radial distortion of an image.


change_radial_distortion_image changes the radial distortion of the input image Image in accordance
to the internal camera parameters CamParamIn and CamParamOut. Each pixel of the output image that lies
within the region Region is transformed into the image plane using CamParamOut and subsequently projected
into a subpixel of Image using CamParamIn. The resulting gray value is determined by bilinear interpolation. If
the subpixel is outside of Image, the corresponding pixel in ImageRectified is set to ’black’ and eliminated
from the image domain.
If the gray values of all pixels in the output image shall be calculated, it is sufficient to pass an empty object
in Region (which must be previously generated by, for example, using gen_empty_obj). This is especially
useful if the size of the output image differs from the size of the input image, and hence, it is not possible to simply
pass the region of the input image in Region.
If CamParamOut was computed via change_radial_distortion_cam_par, ImageRectified is
equivalent to Image obtained with a lens with a modified radial distortion κ. If κ = 0 the image is rectified.
A subsequent pose estimation (determination of the external camera parameters) is not affected by this operation.
Please note that change_radial_distortion_image does not work for line scan cameras with perspective
lenses. Instead, you might want to use image_to_world_plane.
Attention
change_radial_distortion_image can be executed on OpenCL devices if the input image does not ex-
ceed the maximum size of image objects of the selected device. As the OpenCL implementation uses single
precision arithmetic, the results can differ from the CPU implementation.
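For illustration, a minimal sketch; Image is assumed to be loaded and CamParamOrig is assumed to come from a prior calibration:

* Rectify the complete image; an empty object in Region selects all
* pixels of the output image.
gen_empty_obj (EmptyRegion)
change_radial_distortion_cam_par ('adaptive', CamParamOrig, 0.0, \
                                  CamParamRect)
change_radial_distortion_image (Image, EmptyRegion, ImageRectified, \
                                CamParamOrig, CamParamRect)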
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; object : byte / uint2 / real
Original image.
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; object
Region of interest in ImageRectified.
. ImageRectified (output_object) . . . . . . . . . . . . (multichannel-)image(-array) ; object : byte / uint2 / real
Resulting image with modified radial distortion.
. CamParamIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameter for Image.
. CamParamOut (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameter for ImageRectified.
Result
change_radial_distortion_image returns 2 (H_MSG_TRUE) if all parameter values are cor-
rect. If the input is empty (no input image is available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Execution Information

• Supports OpenCL compute devices.


• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on channel level.
Possible Predecessors
change_radial_distortion_cam_par, read_image, grab_image
Possible Successors
edges_image, threshold
See also
change_radial_distortion_cam_par, camera_calibration, read_cam_par,
change_radial_distortion_contours_xld, change_radial_distortion_points
Module
Calibration


change_radial_distortion_points ( : : Row, Col, CamParamIn, CamParamOut : RowChanged, ColChanged )

Change the radial distortion of pixel coordinates.


change_radial_distortion_points changes the radial distortion of input image coordinates (Row,
Col) in accordance to the internal camera parameters CamParamIn and CamParamOut. Each input pixel
(Row, Col) is transformed into the image plane using CamParamIn and projected into another image using
CamParamOut.
Please note that change_radial_distortion_points does not work for line scan cameras with perspec-
tive lenses.
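For illustration, a minimal sketch; Rows and Cols are assumed to be previously measured pixel coordinates and CamParamOrig to come from a prior calibration:

* Transfer measured points into the distortion-free image geometry.
change_radial_distortion_cam_par ('fixed', CamParamOrig, 0.0, \
                                  CamParamRect)
change_radial_distortion_points (Rows, Cols, CamParamOrig, \
                                 CamParamRect, RowsRect, ColsRect)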
Parameters
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Original row component of pixel coordinates.
. Col (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Original column component of pixel coordinates.
. CamParamIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
The inner camera parameters of the camera used to create the input pixel coordinates.
. CamParamOut (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
The inner camera parameters of a camera.
. RowChanged (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Row component of pixel coordinates after changing the radial distortion.
. ColChanged (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Column component of pixel coordinates after changing the radial distortion.
Result
change_radial_distortion_points returns 2 (H_MSG_TRUE) if all parameter values are correct.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

See also
change_radial_distortion_cam_par, camera_calibration, read_cam_par,
change_radial_distortion_contours_xld, change_radial_distortion_image
Module
Calibration

contour_to_world_plane_xld (
Contours : ContoursTrans : CameraParam, WorldPose, Scale : )

Transform an XLD contour into the plane z=0 of a world coordinate system.
The operator contour_to_world_plane_xld transforms contour points given in Contours into the plane
z=0 in a world coordinate system and returns the 3D contour points in ContoursTrans. The world coordinate
system is chosen by passing its 3D pose relative to the camera coordinate system in WorldPose. Hence, the latter
is expected in the form ^{ccs}P_{wcs}, where ccs denotes the camera coordinate system and wcs the world coordinate
system (see Transformations / Poses and “Solution Guide III-C - 3D Vision”). In CameraParam
you must pass the internal camera parameters (see Calibration for the sequence of the parameters and the underly-
ing camera model).
In many cases CameraParam and WorldPose are the result of calibrating the camera with the operator
calibrate_cameras. See below for an example.
With the parameter Scale you can scale the resulting 3D coordinates. The parameter Scale must be specified
as the ratio desired unit/original unit. The original unit is determined by the coordinates of the calibration object.


If the original unit is meters (which is the case if you use the standard calibration plate), you can set the desired
unit directly by selecting ’m’, ’cm’, ’mm’ or ’um’ for the parameter Scale.
Internally, the operator first computes the line of sight between the projection center and the image point in the
camera coordinate system, taking into account the radial distortions. The line of sight is then transformed into the
world coordinate system specified in WorldPose. By intersecting the plane z=0 with the line of sight the 3D
coordinates of the transformed contour ContoursTrans are obtained.
Parameters
. Contours (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; object
Input XLD contours to be transformed in image coordinates.
. ContoursTrans (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; object
Transformed XLD contours in world coordinates.
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer / string
Internal camera parameters.
. WorldPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
3D pose of the world coordinate system in camera coordinates.
Number of elements: 7
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; string / integer / real
Scale or dimension
Default: ’m’
Suggested values: Scale ∈ {’m’, ’cm’, ’mm’, ’microns’, ’um’, 1.0, 0.01, 0.001, 1.0e-6, 0.0254, 0.3048,
0.9144}
Restriction: Scale > 0
Example

* Perform camera calibration (with standard calibration plate).
calibrate_cameras (CalibDataID, Error)
get_calib_data (CalibDataID, 'camera', 0, 'params', CamParam)
* Get reference pose (pose 2 of calibration object 0).
get_calib_data (CalibDataID, 'calib_obj_pose', [0,2], 'pose', ObjInCameraPose)
* Compensate thickness of plate.
set_origin_pose (ObjInCameraPose, 0, 0, 0.0006, WorldPose)
* Transform contours into world coordinate system (unit mm).
contour_to_world_plane_xld (Contours, ContoursTrans, CamParam, \
                            WorldPose, 'mm')

Result
contour_to_world_plane_xld returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary,
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration,
set_origin_pose
See also
image_points_to_world_plane
Module
Calibration


gen_image_to_world_plane_map ( : Map : CameraParam, WorldPose, WidthIn, HeightIn, WidthMapped, HeightMapped, Scale, MapType : )

Generate a projection map that describes the mapping between the image plane and the plane z=0 of a world
coordinate system.
gen_image_to_world_plane_map generates a projection map Map, which describes the mapping between
the image plane and the plane z=0 (plane of measurements) in a world coordinate system. This map can be used
to rectify an image with the operator map_image. The rectified image shows neither radial nor perspective
distortions; it corresponds to an image acquired by a distortion-free camera that looks perpendicularly onto the
plane of measurements. The world coordinate system (wcs) is chosen by passing its 3D pose relative to the camera
coordinate system (ccs) in WorldPose. Thus the pose is expected in the form ^{ccs}P_{wcs} (see Transformations
/ Poses and “Solution Guide III-C - 3D Vision”). In CameraParam you must pass the internal
camera parameters (see Calibration for the sequence of the parameters and the underlying camera model).
In many cases CameraParam and WorldPose are the result of calibrating the camera with the operator
calibrate_cameras. See below for an example.
The size of the images to be mapped can be specified by the parameters WidthIn and HeightIn. The pixel
position of the upper left corner of the output image is determined by the origin of the world coordinate system.
The size of the output image can be chosen by the parameters WidthMapped, HeightMapped, and Scale.
WidthMapped and HeightMapped must be given in pixels.
The parameter Scale can be used to specify the size of a pixel in the transformed image. There are two ways to
use this parameter:

Scale pixels to metric units:
Scale the image such that one pixel in the transformed image corresponds to a metric unit, e.g., setting
’mm’ determines that a pixel in the transformed image corresponds to the area 1mm × 1mm in the plane of
measurements. For this, the original unit needs to be meters. This is the case if you use a standard calibration
plate.
List of values: ’m’, ’cm’, ’mm’, ’microns’, ’um’.
Default: ’m’.
Control scaling manually:
Scale the image by giving a number that determines the ratio of original unit length / desired number of
pixels. E.g., if your original unit is meters and you want every pixel of your transformed image to represent
3mm × 3mm of the measuring plane, your scale is calculated Scale = 0.003/1 = 0.003. If you want to
perform a task like shape-based matching on your transformed image, it is useful to scale the image such that
its content appears in a size similar to the original image.
Restriction: Scale > 0.

The mapping function is stored in the output image Map. Map has the same size as the resulting images after the
mapping. MapType is used to specify the type of the output Map. If ’nearest_neighbor’ is chosen, Map consists
of one image containing one channel, in which for each pixel of the resulting image the linearized coordinate
of the pixel of the input image is stored that is the nearest neighbor to the transformed coordinates. If ’bilinear’
interpolation is chosen, Map consists of one image containing five channels. In the first channel for each pixel in the
resulting image the linearized coordinates of the pixel in the input image is stored that is in the upper left position
relative to the transformed coordinates. The four other channels contain the weights of the four neighboring pixels
of the transformed coordinates which are used for the bilinear interpolation, in the following order:

2 3
4 5

The second channel, for example, contains the weights of the pixels that lie to the upper left relative to the trans-
formed coordinates. If ’coord_map_sub_pix’ is chosen, Map consists of one vector field image of the semantic
type ’vector_field_absolute’, in which for each pixel of the resulting image the subpixel precise coordinates in the
input image are stored.
If several images have to be mapped using the same camera parameters, gen_image_to_world_plane_map
in combination with map_image is much more efficient than the operator image_to_world_plane because
the mapping function needs to be computed only once.


If you want to re-use the created map in another program, you can save it as a multi-channel image with the
operator write_image, using the format ’tiff’.
Parameters
. Map (output_object) . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; object : int4 / int8 / uint2 / vector_field
Image containing the mapping data.
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. WorldPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
3D pose of the world coordinate system in camera coordinates.
Number of elements: 7
. WidthIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the images to be transformed.
Restriction: WidthIn >= 1
. HeightIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the images to be transformed.
Restriction: HeightIn >= 1
. WidthMapped (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .extent.x ; integer
Width of the resulting mapped images in pixels.
Restriction: WidthMapped >= 1
. HeightMapped (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the resulting mapped images in pixels.
Restriction: HeightMapped >= 1
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; string / integer / real
Scale or unit.
Default: ’m’
Suggested values: Scale ∈ {’m’, ’cm’, ’mm’, ’microns’, ’um’, 1.0, 0.01, 0.001, 1.0e-6, 0.0254, 0.3048,
0.9144}
Restriction: Scale > 0
. MapType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the mapping.
Default: ’bilinear’
List of values: MapType ∈ {’nearest_neighbor’, ’bilinear’, ’coord_map_sub_pix’}
Example

* Calibrate camera.
calibrate_cameras (CalibDataID, Error)
* Obtain camera parameters.
get_calib_data (CalibDataID, 'camera', 0, 'params', CamParam)
* Example values, if no calibration data is available:
CamParam := ['area_scan_division', 0.0087, -1859, 8.65e-006, 8.6e-006, \
362.5, 291.6, 768, 576]
* Get reference pose (pose 4 of calibration object 0).
get_calib_data (CalibDataID, 'calib_obj_pose',\
[0,4], 'pose', Pose)
* Example values, if no calibration data is available:
Pose := [-0.11, -0.21, 2.51, 352.73, 346.73, 336.48, 0]
* Compensate thickness of plate.
set_origin_pose (Pose, -1.125, -1.0, 0, PoseNewOrigin)
* Transform the image into the world plane.
read_image (Image, 'calib/calib-3d-coord-04')
gen_image_to_world_plane_map (MapSingle, CamParam, PoseNewOrigin,\
CamParam[6], CamParam[7], 900, 800, 0.0025, 'bilinear')
map_image (Image, MapSingle, ImageMapped)

Result
gen_image_to_world_plane_map returns 2 (H_MSG_TRUE) if all parameter values are correct. If neces-
sary, an exception is raised.


Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration,
set_origin_pose
Possible Successors
map_image
Alternatives
image_to_world_plane
See also
map_image, contour_to_world_plane_xld, image_points_to_world_plane
Module
Calibration

gen_radial_distortion_map ( : Map : CamParamIn, CamParamOut, MapType : )

Generate a projection map that describes the mapping of images corresponding to a changing radial distortion.
gen_radial_distortion_map computes the mapping of images corresponding to a changing radial distor-
tion in accordance to the internal camera parameters CamParamIn and CamParamOut which can be obtained,
e.g., using the operator calibrate_cameras. CamParamIn and CamParamOut contain the old and the
new camera parameters including the old and the new radial distortion, respectively (also see Calibration for the
sequence of the parameters and the underlying camera model). Each pixel of the potential output image is trans-
formed into the image plane using CamParamOut and subsequently projected into a subpixel position of the
potential input image using CamParamIn. Note that gen_radial_distortion_map can only be used with
area scan cameras.
The mapping function is stored in the output image Map. The size of Map is given by the camera parameters
CamParamOut and therefore defines the size of the resulting mapped images using map_image. The size of
the images to be mapped with map_image is determined by the camera parameters CamParamIn. MapType is
used to specify the type of the output Map. If ’nearest_neighbor’ is chosen, Map consists of one image containing
one channel, in which for each pixel of the resulting image the linearized coordinate of the pixel of the input
image is stored that is the nearest neighbor to the transformed coordinates. If ’bilinear’ interpolation is chosen,
Map consists of one image containing five channels. In the first channel for each pixel in the resulting image
the linearized coordinates of the pixel in the input image is stored that is in the upper left position relative to
the transformed coordinates. The four other channels contain the weights of the four neighboring pixels of the
transformed coordinates which are used for the bilinear interpolation, in the following order:

2 3
4 5

The second channel, for example, contains the weights of the pixels that lie to the upper left relative to the trans-
formed coordinates. If ’coord_map_sub_pix’ is chosen, Map consists of one vector field image of the semantic
type ’vector_field_absolute’, in which for each pixel of the resulting image the subpixel precise coordinates in the
input image are stored.
If CamParamOut was computed via change_radial_distortion_cam_par, the mapping describes the
effect of a lens with a modified radial distortion κ. If κ = 0, the mapping corresponds to a rectification. A
subsequent pose estimation (determination of the external camera parameters) is not affected by this operation.
If several images have to be mapped using the same camera parameters, gen_radial_distortion_map
in combination with map_image is much more efficient than the operator
change_radial_distortion_image because the transformation must be computed only once.


If you want to re-use the created map in another program, you can save it as a multi-channel image with the
operator write_image, using the format ’tiff’.
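For illustration, a minimal sketch that rectifies a sequence of images with a single map; the file names, loop bounds, and CamParamOrig are placeholders:

* Compute the map once, then apply it to several images.
change_radial_distortion_cam_par ('adaptive', CamParamOrig, 0.0, \
                                  CamParamRect)
gen_radial_distortion_map (Map, CamParamOrig, CamParamRect, 'bilinear')
for Index := 1 to 10 by 1
    read_image (Image, 'image_' + Index)
    map_image (Image, Map, ImageRectified)
endfor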
Parameters
. Map (output_object) . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; object : int4 / int8 / uint2 / vector_field
Image containing the mapping data.
. CamParamIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Old camera parameters.
. CamParamOut (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
New camera parameters.
. MapType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the mapping.
Default: ’bilinear’
List of values: MapType ∈ {’nearest_neighbor’, ’bilinear’, ’coord_map_sub_pix’}
Result
gen_radial_distortion_map returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary,
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
change_radial_distortion_cam_par, camera_calibration, hand_eye_calibration
Possible Successors
map_image
Alternatives
change_radial_distortion_image
See also
change_radial_distortion_contours_xld
Module
Calibration

image_points_to_world_plane ( : : CameraParam, WorldPose, Rows, Cols, Scale : X, Y )

Transform image points into the plane z=0 of a world coordinate system.
The operator image_points_to_world_plane transforms image points which are given in Rows and Cols
into the plane z=0 in a world coordinate system and returns their 3D coordinates in X and Y. The world coordinate
system is chosen by passing its pose relative to the camera coordinate system in WorldPose. Hence, the latter is
expected in the form ^{ccs}P_{wcs}, where ccs denotes the camera coordinate system and wcs the world coordinate sys-
tem (see Transformations / Poses and “Solution Guide III-C - 3D Vision”). In CameraParam you
must pass the internal camera parameters (see Calibration for the sequence of the parameters and the underlying
camera model).
In many cases CameraParam and WorldPose are the result of calibrating the camera with the operator
calibrate_cameras. See below for an example.
With the parameter Scale you can scale the resulting 3D coordinates. The parameter Scale must be specified
as the ratio desired unit/original unit. The original unit is determined by the coordinates of the calibration object.
If the original unit is meters (which is the case if you use the standard calibration plate), you can set the desired
unit directly by selecting ’m’, ’cm’, ’mm’ or ’um’ for the parameter Scale.
Internally, the operator first computes the line of sight between the projection center and the image contour points
in the camera coordinate system, taking into account the radial distortions. The line of sight is then transformed


into the world coordinate system specified in WorldPose. By intersecting the plane z=0 with the line of sight the
3D coordinates X and Y are obtained.
It is recommended to use only those image points Rows and Cols that lie within the calibrated image size. The
mathematical model only works well for image points that lie within the calibration range.
Parameters
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. WorldPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
3D pose of the world coordinate system in camera coordinates.
Number of elements: 7
. Rows (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y-array ; real / integer
Row coordinates of the points to be transformed.
Default: 100.0
. Cols (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x-array ; real / integer
Column coordinates of the points to be transformed.
Default: 100.0
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; string / integer / real
Scale or dimension
Default: ’m’
Suggested values: Scale ∈ {’m’, ’cm’, ’mm’, ’microns’, ’um’, 1.0, 0.01, 0.001, 1.0e-6, 0.0254, 0.3048,
0.9144}
Restriction: Scale > 0
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x-array ; real
X coordinates of the points in the world coordinate system.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y-array ; real
Y coordinates of the points in the world coordinate system.
Example

* Perform camera calibration (with standard calibration plate).
calibrate_cameras (CalibDataID, Error)
get_calib_data (CalibDataID, 'camera', 0, 'params', CamParam)
* Get reference pose (pose 2 of calibration object 0).
get_calib_data (CalibDataID, 'calib_obj_pose',\
                [0,2], 'pose', ObjInCameraPose)
* Compensate thickness of plate.
set_origin_pose (ObjInCameraPose, 0, 0, 0.0006, WorldPose)
* Transform image points into world coordinate system (unit mm).
image_points_to_world_plane (CamParam, WorldPose, PointRows, PointColumns, \
                             'mm', PointXCoord, PointYCoord)

Result
image_points_to_world_plane returns 2 (H_MSG_TRUE) if all parameter values are correct. If neces-
sary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration,
set_origin_pose
See also
contour_to_world_plane_xld, project_3d_point
Module
Calibration


image_to_world_plane ( Image : ImageWorld : CameraParam, WorldPose, Width, Height, Scale, Interpolation : )

Rectify an image by transforming it into the plane z=0 of a world coordinate system.
image_to_world_plane rectifies an image Image by transforming it into the plane z=0 (plane of mea-
surements) in a world coordinate system. The resulting rectified image ImageWorld shows neither radial nor
perspective distortions; it corresponds to an image acquired by a distortion-free camera that looks perpendicularly
onto the plane of measurements. The world coordinate system is chosen by passing its 3D pose relative to the
camera coordinate system in WorldPose. Hence, latter one is expected in the form ccs Pwcs , where ccs denotes
the camera coordinate system and wcs the world coordinate system (see Transformations / Poses and “Solution
Guide III-C - 3D Vision”). In CameraParam you must pass the internal camera parameters (see Cali-
bration for the sequence of the parameters and the underlying camera model).
In many cases CameraParam and WorldPose are the result of calibrating the camera with the operator
calibrate_cameras. See below for an example.
The pixel position of the upper left corner of the output image ImageWorld is determined by the origin of the
world coordinate system. The size of the output image ImageWorld can be chosen by the parameters Width,
Height, and Scale. Width and Height must be given in pixels.
The parameter Scale can be used to specify the size of a pixel in the transformed image. There are two ways to
use this parameter:

Scale pixels to metric units:
Scale the image such that one pixel in the transformed image corresponds to a metric unit, e.g., setting
’mm’ determines that a pixel in the transformed image corresponds to the area 1mm × 1mm in the plane of
measurements. For this, the original unit needs to be meters. This is the case if you use a standard calibration
plate.
List of values: ’m’, ’cm’, ’mm’, ’microns’, ’um’.
Default: ’m’.
Control scaling manually:
Scale the image by giving a number that determines the ratio of original unit length / desired number of
pixels. E.g., if your original unit is meters and you want every pixel of your transformed image to represent
3mm × 3mm of the measuring plane, your scale is calculated Scale = 0.003/1 = 0.003. If you want to
perform a task like shape-based matching on your transformed image, it is useful to scale the image such that
its content appears in a size similar to the original image.
Restriction: Scale > 0.

The parameter Interpolation specifies, whether bilinear interpolation (’bilinear’) should be applied between
the pixels in the input image or whether the gray value of the nearest neighboring pixel (’nearest_neighbor’) should
be used.
If several images have to be rectified using the same parameters, gen_image_to_world_plane_map in
combination with map_image is much more efficient than the operator image_to_world_plane because
the mapping function needs to be computed only once.
Attention
image_to_world_plane can be executed on OpenCL devices if the input image does not exceed the maxi-
mum size of image objects of the selected device. There can be slight differences in the output compared to the
execution on the CPU.
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; object : byte / uint2 / real
Input image.
. ImageWorld (output_object) . . . . . . . . . . . . . . . . . .(multichannel-)image(-array) ; object : byte / uint2 / real
Transformed image.
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. WorldPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
3D pose of the world coordinate system in camera coordinates.
Number of elements: 7


. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the resulting image in pixels.
Restriction: Width >= 1
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the resulting image in pixels.
Restriction: Height >= 1
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; string / integer / real
Scale or unit.
Default: ’m’
Suggested values: Scale ∈ {’m’, ’cm’, ’mm’, ’microns’, ’um’, 1.0, 0.01, 0.001, 1.0e-6, 0.0254, 0.3048,
0.9144}
Restriction: Scale > 0
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of interpolation.
Default: ’bilinear’
List of values: Interpolation ∈ {’nearest_neighbor’, ’bilinear’}
Example

* Calibrate camera.
calibrate_cameras (CalibDataID, Error)
* Obtain camera parameters.
get_calib_data (CalibDataID, 'camera', 0, 'params', CamParam)
* Example values, if no calibration data is available:
CamParam := ['area_scan_division', 0.0087, -1859, 8.65e-006, 8.6e-006, \
362.5, 291.6, 768, 576]
* Get reference pose (pose 4 of calibration object 0).
get_calib_data (CalibDataID, 'calib_obj_pose',\
[0,4], 'pose', Pose)
* Example values, if no calibration data is available:
Pose := [-0.11, -0.21, 2.51, 352.73, 346.73, 336.48, 0]
* Compensate thickness of plate.
set_origin_pose (Pose, -1.125, -1.0, 0, PoseNewOrigin)
* Transform the image into the world plane.
read_image (Image, 'calib/calib-3d-coord-04')
image_to_world_plane (Image, ImageWorld, CamParam, PoseNewOrigin,\
900, 800, 0.0025, 'bilinear')

Result
image_to_world_plane returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an ex-
ception is raised.
Execution Information

• Supports OpenCL compute devices.


• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on tuple level.
Possible Predecessors
create_pose, hom_mat3d_to_pose, camera_calibration, hand_eye_calibration,
set_origin_pose
Alternatives
gen_image_to_world_plane_map, map_image
See also
contour_to_world_plane_xld, image_points_to_world_plane
Module
Calibration


6.10 Self-Calibration

radial_distortion_self_calibration (
Contours : SelectedContours : Width, Height, InlierThreshold,
RandSeed, DistortionModel, DistortionCenter,
PrincipalPointVar : CameraParam )

Calibrate the radial distortion.


radial_distortion_self_calibration estimates the distortion parameters and the distortion center of
a lens from a set of XLD Contours.
The distortion parameters are returned in CameraParam. Because no other parameters are estimated - particularly
not the focal length or the magnification - a telecentric camera model is returned with Magnification 1 and
scale factor 1 for Sx and Sy . See Calibration for more information on the different camera models.
Application
Based on the result of radial_distortion_self_calibration, you can remove lens distortions from
images by passing the parameter CameraParam, which contains the distortion parameters, to the operators
change_radial_distortion_cam_par and change_radial_distortion_image.
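For illustration, a minimal sketch of this workflow; Image is assumed to be loaded, and the edge filter parameters, the inlier threshold, the random seed, and the PrincipalPointVar value are placeholders:

* Estimate the radial distortion from straight scene edges and
* rectify the image.
get_image_size (Image, Width, Height)
edges_sub_pix (Image, Contours, 'canny', 1.5, 20, 40)
radial_distortion_self_calibration (Contours, SelectedContours, \
                                    Width, Height, 0.15, 42, \
                                    'division', 'variable', 0, \
                                    CameraParam)
change_radial_distortion_cam_par ('adaptive', CameraParam, 0.0, \
                                  CamParamRect)
gen_empty_obj (EmptyRegion)
change_radial_distortion_image (Image, EmptyRegion, ImageRectified, \
                                CameraParam, CamParamRect)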
Basic principle
The estimation of the distortions is based on the assumption that a significant number of straight lines are visible
in the image. Because of lens distortions, these lines will be projected to curved contours. The operator now
determines suitable parameters by which the curved contours can be straightened again, thus compensating the
lens distortions.
Extract input contours
To get suitable input contours Contours, you can, e.g., use edges_sub_pix or lines_gauss.
The contours should be equally distributed and should lie near the image border because
there the degree of distortion is at its maximum and therefore the calibration is most sta-
ble. To improve speed and robustness, you can try to obtain long linear or circu-
lar segments, e.g., with segment_contours_xld, union_collinear_contours_xld,
union_cocircular_contours_xld, or select_shape_xld. If a single image does not contain
enough straight contours in the scene, you can use the contours of multiple images (concat_obj).
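A minimal sketch of collecting contours from several images with concat_obj; the file names and the edge extraction parameters are illustrative assumptions.

* Sketch: accumulate candidate contours from several images of the same
* lens setup (file names and parameter values are assumptions).
gen_empty_obj (AllContours)
for Index := 1 to 3 by 1
    read_image (Image, 'distorted_scene_' + Index)
    edges_sub_pix (Image, Edges, 'canny', 1.5, 20, 40)
    segment_contours_xld (Edges, ContoursSplit, 'lines_circles', 5, 8, 4)
    concat_obj (AllContours, ContoursSplit, AllContours)
endfor
* AllContours can then be passed as Contours to
* radial_distortion_self_calibration.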
Set parameters for contour selection
The operator automatically estimates those contours from Contours that are images of straight lines in the scene
using the robust RANSAC method. The contours that do not fulfill this condition and hence are not suited for the
calibration process are called outliers. The operator can cope with a maximum outlier percentage of 50 percent. A
contour is classified as an outlier if the mean deviation of the contour from its associated straight line is, after the
distortion correction, higher than a given threshold T:

\[ \frac{1}{m} \sum_{j=1}^{m} |d_j| \; > \; \mathrm{InlierThreshold} \cdot \frac{m}{100} \; = \; T \]
The value InlierThreshold describes the mean deviation of a contour from its associated line in pixels for
a contour that contains 100 points. The actual threshold T is derived from InlierThreshold by scaling it
with the reference length (100) and the number of contour points m. Therefore, similar contours are classified
alike. Typical values of InlierThreshold range from 0.05 to 0.5. The higher the value, the more deviation
is tolerated. By choosing the value 0, all the contours of Contours are used for the calibration process. The
RANSAC contour selection will then be suppressed to enable a manual contour selection. This can be helpful if
the outlier percentage is higher than 50 percent.
With the parameter RandSeed, you can control the randomized behavior of the RANSAC algorithm and force
it to return reproducible results. The parameter is passed as initial value to the internally used random number
generator. If it is set to a positive value, the operator returns identical results for each call with identical parameter
values. The value set for the HALCON system variable ’seed_rand’ (see set_system) does not affect the results
of radial_distortion_self_calibration.
radial_distortion_self_calibration returns the contours that were chosen for the calibration pro-
cess in SelectedContours.

Select distortion model
The distortion model used in the calibration can be selected with the parameter DistortionModel. By choos-
ing the division model (DistortionModel = ’division’), the distortions are modeled by the distortion parame-
ter κ. By choosing the polynomial model (DistortionModel = ’polynomial’), the distortions are modeled by
the radial distortion parameters K1 , K2 , K3 and the decentering distortion parameters P1 , P2 . See Calibration for
details on the different camera models.
Set parameters for the distortion center estimation
The starting value for the estimation of the distortion center c = (cx , cy ) is the center of the image; the image size
is defined by Width and Height.
The distortion parameters (κ, cx , cy ) or (K1 , K2 , K3 , P1 , P2 , cx , cy ) , respectively, are estimated via the methods
’variable’, ’adaptive’, or ’fixed’, which are specified via the parameter DistortionCenter:

’variable’ In the default mode ’variable’, the distortion center c is estimated with all the other calibration pa-
rameters at the same time. Here, many contours should lie equally distributed near the image borders or the
distortion should be high. Otherwise, the search for the distortion center could be ill-posed, which results in
instability.
’adaptive’ With the method ’adaptive’, the distortion center c is at first fixed in the image center. Then, the outliers
are eliminated by using the InlierThreshold. Finally, the calibration process is rerun by estimating
(κ, cx , cy ) or (K1 , K2 , K3 , P1 , P2 , cx , cy ) , respectively, which will be accepted if c = (cx , cy ) results from
a stable calibration and lies near the image center. Otherwise, c will be assumed to lie in the image center.
This method should be used if the distortion center can be assumed to lie near the image center and if very
few contours are available or the position of other contours is bad (e.g., the contours have the same direction
or lie in the same image region).
’fixed’ By choosing the method ’fixed’, the distortion center will be assumed fixed in the image center and only
κ or (K1 , K2 , K3 , P1 , P2 ), respectively, will be estimated. This method should be used in case of very weak
distortions or few contours in bad position.

In order to control the deviation of c from the image center, the parameter PrincipalPointVar can be
used in the methods ’adaptive’ and ’variable’. If the deviation from the image center should be controlled,
PrincipalPointVar must lie between 1 and 100. The higher the value, the more the distortion center can
deviate from the image center. By choosing the value 0, the principal point is not controlled, i.e., the principal
point is determined solely based on the contours. The parameter PrincipalPointVar should be used in cases
of weak distortions or similarly oriented contours. Otherwise, a stable solution cannot be guaranteed.
Runtime
The runtime of radial_distortion_self_calibration is shortest for DistortionCenter = ’variable’
and PrincipalPointVar = 0. The runtime for DistortionCenter = ’variable’ and
PrincipalPointVar > 0 increases significantly for smaller values of PrincipalPointVar. The
runtimes for DistortionCenter = ’adaptive’ and DistortionCenter = ’fixed’ are also significantly
higher than for DistortionCenter = ’variable’ and PrincipalPointVar = 0.
Attention
Since the polynomial model (DistortionModel = ’polynomial’) uses more parameters than the division model
(DistortionModel = ’division’), the calibration using the polynomial model can be slightly less stable than
the calibration using the division model, which becomes noticeable in the accuracy of the decentering distortion
parameters P1, P2. To improve the stability, contours of multiple images can be used. Additional stability can be
achieved by setting DistortionCenter = ’fixed’, DistortionCenter = ’adaptive’, or
PrincipalPointVar > 0, as already mentioned above.
Parameters
. Contours (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; object
Contours that are available for the calibration.
. SelectedContours (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; object
Contours that were used for the calibration.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the images from which the contours were extracted.
Default: 640
Suggested values: Width ∈ {640, 768}
Restriction: Width > 0

. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the images from which the contours were extracted.
Default: 480
Suggested values: Height ∈ {480, 576}
Restriction: Height > 0
. InlierThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Threshold for the classification of outliers.
Default: 0.05
Suggested values: InlierThreshold ∈ {0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1}
Restriction: InlierThreshold >= 0
. RandSeed (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Seed value for the random number generator.
Default: 42
. DistortionModel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Determines the distortion model.
Default: ’division’
List of values: DistortionModel ∈ {’division’, ’polynomial’}
. DistortionCenter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Determines how the distortion center will be estimated.
Default: ’variable’
List of values: DistortionCenter ∈ {’fixed’, ’adaptive’, ’variable’}
. PrincipalPointVar (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Controls the deviation of the distortion center from the image center; larger values allow larger deviations
from the image center; 0 switches the penalty term off.
Default: 0.0
Suggested values: PrincipalPointVar ∈ {0.0, 5.0, 10.0, 20.0, 50.0, 100.0}
Restriction: PrincipalPointVar >= 0.0 && PrincipalPointVar <= 100.0
. CameraParam (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
Example
* Assume that GrayImage is one image in gray values with a
* resolution of 640 x 480 and a suitable number of contours. Then
* the following example performs the calibration using these
* contours and corrects the image with the estimated distortion
* parameters.
edges_sub_pix (GrayImage, Edges, 'canny', 1.0, 20, 40)
segment_contours_xld (Edges, ContoursSplit, 'lines_circles', 5, 8, 4)
radial_distortion_self_calibration (ContoursSplit, SelectedContours, \
640, 480, 0.08, 42, 'division', \
'variable', 0, CameraParam)
get_domain (GrayImage, Domain)
change_radial_distortion_cam_par ('fullsize', CameraParam, 0, CamParamOut)
change_radial_distortion_image (GrayImage, Domain, ImageRectified, \
CameraParam, CamParamOut)
Result
If the parameters are valid, the operator radial_distortion_self_calibration returns the value 2
(H_MSG_TRUE). If necessary an exception is raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
edges_sub_pix, segment_contours_xld
Possible Successors
change_radial_distortion_cam_par, change_radial_distortion_image
See also
camera_calibration
References
T. Thormählen, H. Broszio: “Automatic line-based estimation of radial lens distortion”; in: Integrated Computer-
Aided Engineering; vol. 12; pp. 177-190; 2005.
Module
Calibration

radiometric_self_calibration ( Images : : ExposureRatios, Features,
FunctionType, Smoothness, PolynomialDegree : InverseResponse )

Perform a radiometric self-calibration of a camera.
radiometric_self_calibration performs a radiometric self-calibration of a camera. For this, at least
two images that show the same image contents (scene) must be passed in Images. All images passed in Images
must be acquired with different exposures. Typically, the different exposures are obtained by changing the shutter
times at the camera. It is not recommended to change the exposure by changing the aperture of the lens since
in this case the exposures cannot be determined accurately enough. The ratio of the exposures of consecutive
images is passed in ExposureRatios. For example, a value of 0.5 specifies that the second image of an image
pair has been acquired with half the exposure of the first image of the pair. The exposure ratio can easily be
determined from the shutter times since the exposure is proportional to the shutter time. The exposure ratio must
be greater than 0 and smaller than 1. This means that the images must be sorted according to descending exposure.
ExposureRatios must contain one element less than the number of images passed in Images. If all exposure
ratios are identical, as a simplification a single value can be passed in ExposureRatios.
As described above, the images passed in Images must show identical image contents. Hence, it is typically nec-
essary that neither the camera nor the objects in the scene move. If the camera has rotated around the optical center,
the images should be aligned to a reference image (one of the images) using proj_match_points_ransac
and projective_trans_image. If the features used for the radiometric calibration are determined from the
2D gray value histogram of consecutive image pairs (Features = ’2d_histogram’), it is essential that the images
are aligned and that the objects in the scene do not move. For Features = ’1d_histograms’, the features used
for the radiometric calibration are determined from the 1D gray value histograms of the image pairs. In this mode,
the calibration can theoretically be performed if the 1D histograms of the images do not change by the movement
of the objects in the images. This can, for example, be the case if an object moves in front of a uniformly textured
background. However, it is preferable to use Features = ’2d_histogram’ because this mode is more accurate.
The mode Features = ’1d_histograms’ should only be used if it is impossible to construct the camera set-up
such that neither the camera nor the objects in the scene move.
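A minimal sketch of such an alignment, assuming the camera was only rotated around the optical center; all parameter values and the direction of the estimated transformation are assumptions.

* Sketch: align OtherImage to ReferenceImage before the radiometric
* calibration (parameter values are assumptions).
points_foerstner (ReferenceImage, 1, 2, 3, 100, 0.1, 'gauss', 'true', \
                  RowsR, ColsR, _, _, _, _, _, _, _, _)
points_foerstner (OtherImage, 1, 2, 3, 100, 0.1, 'gauss', 'true', \
                  RowsO, ColsO, _, _, _, _, _, _, _, _)
proj_match_points_ransac (ReferenceImage, OtherImage, RowsR, ColsR, RowsO, \
                          ColsO, 'ncc', 10, 0, 0, 480, 640, 0, 0.5, \
                          'gold_standard', 2, 42, HomMat2D, Points1, Points2)
* Assuming HomMat2D maps the reference image to OtherImage, its inverse
* warps OtherImage back into the geometry of the reference image.
hom_mat2d_invert (HomMat2D, HomMat2DInvert)
projective_trans_image (OtherImage, OtherImageAligned, HomMat2DInvert, \
                        'bilinear', 'false', 'false')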
Furthermore, care should be taken to cover the range of gray values without gaps by choosing appropriate image
contents. Whether there are gaps in the range of gray values can easily be checked based on the 1D gray value
histograms of the images or the 2D gray value histograms of consecutive images. In the 1D gray value histograms
(see gray_histo_abs), there should be no areas between the minimum and maximum gray value that have a
frequency of 0 or a very small frequency. In the 2D gray value histograms (see histo_2dim), a single connected
region having the shape of a “strip” should result from a threshold operation with a lower threshold of 1. If more
than one connected component results, a more suitable image content should be chosen. If the image content can
be chosen such that the gray value range of the image (e.g., 0-255 for byte images) can be covered with two images
with different exposures, and if there are no gaps in the histograms, the two images suffice for the calibration. This,
however, is typically not the case, and hence multiple images must be used to cover the entire gray value range.
As described above, for this multiple images with different exposures must be taken to cover the entire gray value
range as well as possible. For this, normally the first image should be exposed such that the maximum gray value
is slightly below the saturation limit of the camera, or such that the image is significantly overexposed. If the first
image is overexposed, a significant overexposure is necessary to enable radiometric_self_calibration
to detect the overexposed areas reliably. If the camera exhibits an unusual saturation behavior (e.g., a saturation
limit that lies significantly below the maximum gray value) the overexposed areas should be masked out by hand
with reduce_domain in the overexposed image.
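A minimal sketch of the gap check described above for a single image, assuming a quantization of 1.0 and a simple minimum-frequency test.

* Sketch: look for empty bins between the minimum and maximum gray value
* (values are assumptions).
get_domain (Image, Domain)
gray_histo_abs (Domain, Image, 1.0, AbsoluteHisto)
min_max_gray (Domain, Image, 0, MinGray, MaxGray, _)
* A minimum frequency of 0 inside the occupied range indicates a gap that
* should be covered by an additional exposure.
MinFrequency := min(AbsoluteHisto[int(MinGray):int(MaxGray)])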
radiometric_self_calibration returns the inverse gray value response function of the camera in
InverseResponse. The inverse response function can be used to create an image with a linear response
by using InverseResponse as the LUT in lut_trans. The parameter FunctionType determines which
function model is used to model the response function. For FunctionType = ’discrete’, the response func-
tion is described by a discrete function with the relevant number of gray values (256 for byte images). For
FunctionType = ’polynomial’, the response is described by a polynomial of degree PolynomialDegree.
The computation of the response function is slower for FunctionType = ’discrete’. However, since a poly-
nomial tends to oscillate in the areas in which no gray value information can be derived, even if smoothness
constraints are imposed as described below, the discrete model should usually be preferred over the polynomial
model.
The inverse response function is returned as a tuple of integer values for FunctionType = ’discrete’ and
FunctionType = ’polynomial’. In some applications, it might be desirable to return the inverse response func-
tion as floating point values to avoid the numerical error that is introduced by rounding. For example, if the inverse
response function must be inverted to obtain the response function of the camera, there is some loss of informa-
tion if the values are returned as integers. For these applications, FunctionType can be set to ’discrete_real’ or
’polynomial_real’, in which case the inverse response function will be returned as a tuple of floating point numbers.
The parameter Smoothness defines (in addition to the constraints on the response function that can be de-
rived from the images) constraints on the smoothness of the response function. If, as described above, the gray
value range can be covered completely and without gaps, the default value of 1 should not be changed. Other-
wise, values > 1 can be used to obtain a stronger smoothing of the response function, while values < 1 lead
to a weaker smoothing. The smoothing is particularly important in areas for which no gray value information
can be derived from the images, i.e., in gaps in the histograms and for gray values smaller than the minimum
gray value of all images or larger than the maximum gray value of all images. In these areas, the smoothness
constraints lead to an interpolation or extrapolation of the response function. Because of the nature of the inter-
nally derived constraints, FunctionType = ’discrete’ favors an exponential function in the undefined areas,
whereas FunctionType = ’polynomial’ favors a straight line. Please note that the interpolation and extrapo-
lation is always less reliable than to cover the gray value range completely and without gaps. Therefore, in any
case it should be attempted first to acquire the images optimally, before the smoothness constraints are used to
fill in the remaining gaps. In all cases, the response function should be checked for plausibility after the call to
radiometric_self_calibration. In particular, it should be checked whether InverseResponse is
monotonic. If this is not the case, a more suitable scene should be used to avoid interpolation, or Smoothness
should be set to a larger value. For FunctionType = ’polynomial’, it may also be necessary to change
PolynomialDegree. If, despite these changes, an implausible response is returned, the saturation behavior
of the camera should be checked, e.g., based on the 2D gray value histogram, and the saturated areas should be
masked out by hand, as described above.
When the inverse gray value response function of the camera is determined, the absolute energy falling on the
camera cannot be determined. This means that InverseResponse can only be determined up to a scale factor.
Therefore, an additional constraint is used to fix the unknown scale factor: the maximum gray value that can occur
should occur for the maximum input gray value, e.g., InverseResponse[255] = 255 for byte images. This
constraint usually leads to the most intuitive results. If, however, a multichannel image (typically an RGB image)
should be radiometrically calibrated (for this, each channel must be calibrated separately), the above constraint
may lead to the result that a different scaling factor is determined for each channel. This may lead to the result that
gray tones no longer appear gray after the correction. In this case, a manual white balancing step must be carried
out by identifying a homogeneous gray area in the original image, and by deriving appropriate scaling factors from
the corrected gray values for two of the three response curves (or, in general, for n − 1 of the n channels). Here,
the response curve that remains invariant should be chosen such that all scaling factors are < 1. With the scaling
factors thus determined, new response functions should be calculated by multiplying each value of a response
function with the scaling factor corresponding to that response function.
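A minimal sketch of such a manual white balance for an RGB image whose three channels were calibrated separately, assuming a hand-selected region GrayRegion that marks a homogeneous gray area; all variable names are assumptions.

* Sketch: derive scaling factors from a homogeneous gray area and rescale
* the inverse response functions (names are assumptions).
decompose3 (RGBImage, ImageR, ImageG, ImageB)
lut_trans (ImageR, ImageRLin, InverseResponseR)
lut_trans (ImageG, ImageGLin, InverseResponseG)
lut_trans (ImageB, ImageBLin, InverseResponseB)
intensity (GrayRegion, ImageRLin, MeanR, _)
intensity (GrayRegion, ImageGLin, MeanG, _)
intensity (GrayRegion, ImageBLin, MeanB, _)
* Keep the channel with the smallest corrected mean fixed so that all
* scaling factors are < 1, and scale the other response functions.
MinMean := min([MeanR,MeanG,MeanB])
InverseResponseR := int(InverseResponseR * MinMean / MeanR)
InverseResponseG := int(InverseResponseG * MinMean / MeanG)
InverseResponseB := int(InverseResponseB * MinMean / MeanB)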
Parameters

. Images (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage-array ; object : byte / uint2
Input images.

. ExposureRatios (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Ratio of the exposure energies of successive image pairs.
Default: 0.5
Suggested values: ExposureRatios ∈ {0.25, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8}
Restriction: ExposureRatios > 0 && ExposureRatios < 1
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Features that are used to compute the inverse response function of the camera.
Default: ’2d_histogram’
List of values: Features ∈ {’2d_histogram’, ’1d_histograms’}
. FunctionType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the inverse response function of the camera.
Default: ’discrete’
List of values: FunctionType ∈ {’discrete’, ’polynomial’, ’discrete_real’, ’polynomial_real’}
. Smoothness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Smoothness of the inverse response function of the camera.
Default: 1.0
Suggested values: Smoothness ∈ {0.3, 0.5, 0.7, 0.8, 1.0, 1.2, 1.5, 2.0, 3.0}
Restriction: Smoothness > 0
. PolynomialDegree (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Degree of the polynomial if FunctionType = ’polynomial’.
Default: 5
Suggested values: PolynomialDegree ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Restriction: PolynomialDegree >= 1 && PolynomialDegree <= 20
. InverseResponse (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer / real
Inverse response function of the camera.
Example
open_framegrabber ('1394IIDC', 1, 1, 0, 0, 0, 0, 'default', -1, \
'default', -1, 'default', 'default', 'default', \
-1, -1, AcqHandle)
* Define appropriate shutter times.
Shutters := [1000,750,500,250,125]
Num := |Shutters|
* Grab and accumulate images with the different exposures. In this
* loop, it must be ensured that the scene remains static.
gen_empty_obj (Images)
for I := 0 to Num-1 by 1
set_framegrabber_param (AcqHandle, 'shutter', Shutters[I])
grab_image (Image, AcqHandle)
concat_obj (Images, Image, Images)
endfor
* Compute the exposure ratios from the shutter times.
ExposureRatios := real(Shutters[1:Num-1])/real(Shutters[0:Num-2])
radiometric_self_calibration (Images, ExposureRatios, '2d_histogram', \
'discrete', 1, 5, InverseResponse)
* Note that if the frame grabber supports hardware LUTs, we could
* also call set_framegrabber_lut here instead of lut_trans below.
* This would be more efficient.
while (1)
grab_image_async (Image, AcqHandle, -1)
lut_trans (Image, ImageLinear, InverseResponse)
* Process radiometrically correct image.
* [...]
endwhile
close_framegrabber (AcqHandle)
Result
If the parameters are valid, the operator radiometric_self_calibration returns the value 2
(H_MSG_TRUE). If necessary an exception is raised.

Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
read_image, grab_image, grab_image_async, set_framegrabber_param, concat_obj,
proj_match_points_ransac, proj_match_points_ransac_guided,
projective_trans_image
Possible Successors
lut_trans
See also
histo_2dim, gray_histo, gray_histo_abs, reduce_domain
Module
Calibration

stationary_camera_self_calibration ( : : NumImages, ImageWidth,
ImageHeight, ReferenceImage, MappingSource, MappingDest,
HomMatrices2D, Rows1, Cols1, Rows2, Cols2, NumCorrespondences,
EstimationMethod, CameraModel, FixedCameraParams : CameraMatrices,
Kappa, RotationMatrices, X, Y, Z, Error )

Perform a self-calibration of a stationary projective camera.
stationary_camera_self_calibration performs a self-calibration of a stationary projective camera.
Here, stationary means that the camera may only rotate around the optical center and may zoom. Hence, the
optical center may not move. Projective means that the camera model is a pinhole camera that can be described by
a projective 3D-2D transformation. In particular, radial distortions can only be modeled for cameras with constant
parameters. If the lens exhibits significant radial distortions they should be removed, at least approximately, with
change_radial_distortion_image.
The camera model being used can be described as follows:

\[ x = P X . \]

Here, x is a homogeneous 2D vector, X a homogeneous 3D vector, and P a homogeneous 3×4 projection matrix.
The projection matrix P can be decomposed as follows:

\[ P = K (R \mid t) . \]
Here, R is a 3×3 rotation matrix and t is an inhomogeneous 3D vector. These two entities describe
the position (pose) of the camera in 3D space. This convention is analogous to the convention used in
camera_calibration, i.e., for R = I and t = 0 the x axis points to the right, the y axis downwards, and
the z axis points forward. K is the calibration matrix of the camera (the camera matrix) which can be described as
follows:

\[ K = \begin{pmatrix} a f & s f & u \\ 0 & f & v \\ 0 & 0 & 1 \end{pmatrix} . \]
Here, f is the focal length of the camera in pixels, a the aspect ratio of the pixels, s is a factor that models the
skew of the image axes, and (u, v) is the principal point of the camera in pixels. In this convention, the x axis
corresponds to the column axis and the y axis to the row axis.
Since the camera is stationary, it can be assumed that t = 0. With this convention, it is easy to see that the
fourth coordinate of the homogeneous 3D vector X has no influence on the position of the projected 3D point.
Consequently, the fourth coordinate can be set to 0, and it can be seen that X can be regarded as a point at infinity,
and hence represents a direction in 3D. With this convention, the fourth coordinate of X can be omitted, and hence
X can be regarded as inhomogeneous 3D vector which can only be determined up to scale since it represents a
direction. With this, the above projection equation can be written as follows:

\[ x = K R X . \]
If two images of the same point are taken with a stationary camera, the following equations hold:

\[ x_1 = K_1 R_1 X , \qquad x_2 = K_2 R_2 X \]

and consequently

\[ x_2 = K_2 R_2 R_1^{-1} K_1^{-1} x_1 = K_2 R_{12} K_1^{-1} x_1 = H_{12} x_1 . \]
If the camera parameters do not change when taking the two images, K1 = K2 holds. Because of the above, the
two images of the same 3D point are related by a projective 2D transformation. This transformation can be deter-
mined with proj_match_points_ransac. It needs to be taken into account that the order of the coordinates
of the projective 2D transformations in HALCON is the opposite of the above convention. Furthermore, it needs
to be taken into account that proj_match_points_ransac uses a coordinate system in which the origin
of a pixel lies in the upper left corner of the pixel, whereas stationary_camera_self_calibration
uses a coordinate system that corresponds to the definition used in camera_calibration, in which the
origin of a pixel lies in the center of the pixel. For projective 2D transformations that are determined with
proj_match_points_ransac the rows and columns must be exchanged and a translation of (0.5, 0.5) must
be applied. Hence, instead of $H_{12} = K_2 R_{12} K_1^{-1}$, the following equations hold in HALCON:

\[ H_{12} = \begin{pmatrix} 0 & 1 & 0.5 \\ 1 & 0 & 0.5 \\ 0 & 0 & 1 \end{pmatrix} K_2 R_{12} K_1^{-1} \begin{pmatrix} 0 & 1 & -0.5 \\ 1 & 0 & -0.5 \\ 0 & 0 & 1 \end{pmatrix} \]

and

\[ K_2 R_{12} K_1^{-1} = \begin{pmatrix} 0 & 1 & -0.5 \\ 1 & 0 & -0.5 \\ 0 & 0 & 1 \end{pmatrix} H_{12} \begin{pmatrix} 0 & 1 & 0.5 \\ 1 & 0 & 0.5 \\ 0 & 0 & 1 \end{pmatrix} . \]
From the above equation, constraints on the camera parameters can be derived in two ways. First, the rotation can
be eliminated from the above equation, leading to equations that relate the camera matrices with the projective 2D
transformation between the two images. Let $H_{ij}$ be the projective transformation from image i to image j. Then,

\[ K_j K_j^{\top} = H_{ij} K_i K_i^{\top} H_{ij}^{\top} \]

\[ K_j^{-\top} K_j^{-1} = H_{ij}^{-\top} K_i^{-\top} K_i^{-1} H_{ij}^{-1} \]
From the second equation, linear constraints on the camera parameters can be derived. This method is used for
EstimationMethod = ’linear’. Here, all source images i given by MappingSource and all destination
images j given by MappingDest are used to compute constraints on the camera parameters. After the camera
parameters have been determined from these constraints, the rotation of the camera in the respective images can
be determined based on the equation $R_{ij} = K_j^{-1} H_{ij} K_i$ and by constructing a chain of transformations from the
reference image ReferenceImage to the respective image. From the first equation above, a nonlinear method
to determine the camera parameters can be derived by minimizing the following error:

\[ E = \sum_{(i,j) \in \{(s,d)\}} \left\| K_j K_j^{\top} - H_{ij} K_i K_i^{\top} H_{ij}^{\top} \right\|_F^2 \]

Here, analogously to the linear method, {(s, d)} is the set of overlapping images specified by MappingSource
and MappingDest. This method is used for EstimationMethod = ’nonlinear’. To start the minimization,
the camera parameters are initialized with the results of the linear method. These two methods are very fast and
return acceptable results if the projective 2D transformations Hij are sufficiently accurate. For this, it is essential
that the images do not have radial distortions. It can also be seen that in the above two methods the camera
parameters are determined independently from the rotation parameters, and consequently the possible constraints
are not fully exploited. In particular, it can be seen that it is not enforced that the projections of the same 3D
point lie close to each other in all images. Therefore, stationary_camera_self_calibration offers
a complete bundle adjustment as a third method (EstimationMethod = ’gold_standard’). Here, the camera
parameters and rotations as well as the directions in 3D corresponding to the image points (denoted by the vectors
X above), are determined in a single optimization by minimizing the following error:

\[ E = \sum_{i=1}^{n} \left( \sum_{j=1}^{m} \left\| x_{ij} - K_i R_i X_j \right\|^2 + \frac{1}{\sigma^2} \left( u_i^2 + v_i^2 \right) \right) \]
In this equation, only the terms for which the reconstructed direction Xj is visible in image i are taken into account.
The starting values for the parameters in the bundle adjustment are derived from the results of the nonlinear method.
Because of the high complexity of the minimization the bundle adjustment requires a significantly longer execution
time than the two simpler methods. Nevertheless, because the bundle adjustment results in significantly better
results, it should be preferred.
In each of the three methods the camera parameters that should be computed can be specified. The remaining
parameters are set to a constant value. Which parameters should be computed is determined with the parameter
CameraModel which contains a tuple of values. CameraModel must always contain the value ’focus’ that
specifies that the focal length f is computed. If CameraModel contains the value ’principal_point’ the principal
point (u, v) of the camera is computed. If not, the principal point is set to (ImageWidth/2, ImageHeight/2).
If CameraModel contains the value ’aspect’ the aspect ratio a of the pixels is determined, otherwise it is set to
1. If CameraModel contains the value ’skew’ the skew of the image axes is determined, otherwise it is set to
0. Only the following combinations of the parameters are allowed: ’focus’, [’focus’, ’principal_point’], [’focus’,
’aspect’], [’focus’, ’principal_point’, ’aspect’], and [’focus’, ’principal_point’, ’aspect’, ’skew’].
Additionally, it is possible to determine the parameter Kappa, which models radial lens distortions, if
EstimationMethod = ’gold_standard’ has been selected. In this case, ’kappa’ can also be included in the
parameter CameraModel. Kappa corresponds to the radial distortion parameter κ of the division model for lens
distortions (see camera_calibration).
When using EstimationMethod = ’gold_standard’ to determine the principal point, it is possible to penalize
estimations far away from the image center. This can be done by appending a sigma to the ’principal_point’ value
in CameraModel, e.g., ’principal_point:0.5’. If no sigma is given, the penalty term in the above equation for
calculating the error is omitted.
The parameter FixedCameraParams determines whether the camera parameters can change in each im-
age or whether they should be assumed constant for all images. To calibrate a camera so that it can
later be used for measuring with the calibrated camera, only FixedCameraParams = ’true’ is use-
ful. The mode FixedCameraParams = ’false’ is mainly useful to compute spherical mosaics with
gen_spherical_mosaic if the camera zoomed or if the focus changed significantly when the mosaic images
were taken. If a mosaic with constant camera parameters should be computed, of course FixedCameraParams
= ’true’ should be used. It should be noted that for FixedCameraParams = ’false’ the camera calibration
problem is determined very badly, especially for long focal lengths. In these cases, often only the focal length can
be determined. Therefore, it may be necessary to use CameraModel = ’focus’ or to constrain the position of the
principal point by using a small Sigma for the penalty term for the principal point.
The number of images that are used for the calibration is passed in NumImages. Based on the number of images,
several constraints for the camera model must be observed. If only two images are used, even under the assumption
of constant parameters not all camera parameters can be determined. In this case, the skew of the image axes should
be set to 0 by not adding ’skew’ to CameraModel. If FixedCameraParams = ’false’ is used, the full set of
camera parameters can never be determined, no matter how many images are used. In this case, the skew should be
set to 0 as well. Furthermore, it should be noted that the aspect ratio can only be determined accurately if at least
one image is rotated around the optical axis (the z axis of the camera coordinate system) with respect to the other
images. If this is not the case the computation of the aspect ratio should be suppressed by not adding ’aspect’ to
CameraModel.
As described above, to calibrate the camera it is necessary that the projective transformation for each overlapping
image pair is determined with proj_match_points_ransac. For example, for a 2×2 block of images in the
following layout
1 2
3 4

the following projective transformations should be determined, assuming that all images overlap each other: 1→2,
1→3, 1→4, 2→3, 2→4, and 3→4. The indices of the images that determine the respective transformation are
given by MappingSource and MappingDest. The indices start at 1. Consequently, in the above example
MappingSource = [1,1,1,2,2,3] and MappingDest = [2,3,4,3,4,4] must be used. The number of images
in the mosaic is given by NumImages. It is used to check whether each image can be reached by a chain of
transformations. The index of the reference image is given by ReferenceImage. On output, this image has the
identity matrix as its transformation matrix.
The 3 × 3 projective transformation matrices that correspond to the image pairs are passed in
HomMatrices2D. Additionally, the coordinates of the matched point pairs in the image pairs must
be passed in Rows1, Cols1, Rows2, and Cols2. They can be determined from the output of
proj_match_points_ransac with tuple_select or with the HDevelop function subset. To enable
stationary_camera_self_calibration to determine which point pair belongs to which image pair,
NumCorrespondences must contain the number of found point matches for each image pair.
The computed camera matrices Ki are returned in CameraMatrices as 3 × 3 matrices. For
FixedCameraParams = ’false’, NumImages matrices are returned. Since for FixedCameraParams =
’true’ all camera matrices are identical, a single camera matrix is returned in this case. The computed rotations Ri
are returned in RotationMatrices as 3 × 3 matrices. RotationMatrices always contains NumImages
matrices.
If EstimationMethod = ’gold_standard’ is used, (X, Y, Z) contains the reconstructed directions Xj . In ad-
dition, Error contains the average projection error of the reconstructed directions. This can be used to check
whether the optimization has converged to useful values.
If the computed camera parameters are used to project 3D points or 3D directions into the image i, the respective
camera matrix should be multiplied with the corresponding rotation matrix (with hom_mat2d_compose), as
sketched below.
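A minimal sketch of this composition, assuming FixedCameraParams = ’true’ (a single camera matrix) and that each 3 × 3 matrix is stored as nine consecutive tuple elements.

* Sketch: build the projection for image I from the calibration result
* (index handling is an assumption).
I := 2
RotMat := RotationMatrices[(I - 1) * 9 : I * 9 - 1]
hom_mat2d_compose (CameraMatrices, RotMat, ProjMat)
* ProjMat now maps 3D directions (X, Y, Z) into image I.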
Parameters
. NumImages (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of different images that are used for the calibration.
Restriction: NumImages >= 2
. ImageWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the images from which the points were extracted.
Restriction: ImageWidth > 0
. ImageHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .extent.y ; integer
Height of the images from which the points were extracted.
Restriction: ImageHeight > 0
. ReferenceImage (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Index of the reference image.
. MappingSource (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Indices of the source images of the transformations.
. MappingDest (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Indices of the target images of the transformations.
. HomMatrices2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Array of 3 × 3 projective transformation matrices.
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real / integer
Row coordinates of corresponding points in the respective source images.
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real / integer
Column coordinates of corresponding points in the respective source images.
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real / integer
Row coordinates of corresponding points in the respective destination images.
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real / integer
Column coordinates of corresponding points in the respective destination images.
. NumCorrespondences (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Number of point correspondences in the respective image pair.

. EstimationMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Estimation algorithm for the calibration.
Default: ’gold_standard’
List of values: EstimationMethod ∈ {’linear’, ’nonlinear’, ’gold_standard’}
. CameraModel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
Camera model to be used.
Default: [’focus’,’principal_point’]
List of values: CameraModel ∈ {’focus’, ’aspect’, ’skew’, ’principal_point’, ’kappa’}
. FixedCameraParams (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Are the camera parameters identical for all images?
Default: ’true’
List of values: FixedCameraParams ∈ {’true’, ’false’}
. CameraMatrices (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
(Array of) 3 × 3 projective camera matrices that determine the internal camera parameters.
. Kappa (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Radial distortion of the camera.
. RotationMatrices (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Array of 3 × 3 transformation matrices that determine rotation of the camera in the respective image.
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.x-array ; real
X-Component of the direction vector of each point if EstimationMethod = ’gold_standard’ is used.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.y-array ; real
Y-Component of the direction vector of each point if EstimationMethod = ’gold_standard’ is used.
. Z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.z-array ; real
Z-Component of the direction vector of each point if EstimationMethod = ’gold_standard’ is used.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Average error per reconstructed point if EstimationMethod = ’gold_standard’ is used.
Example
* Assume that Images contains four images in the layout given in the
* above description. Then the following example performs the camera
* self-calibration using these four images.
From := [1,1,1,2,2,3]
To := [2,3,4,3,4,4]
HomMatrices2D := []
Rows1 := []
Cols1 := []
Rows2 := []
Cols2 := []
NumMatches := []
for J := 0 to |From|-1 by 1
select_obj (Images, ImageF, From[J])
select_obj (Images, ImageT, To[J])
points_foerstner (ImageF, 1, 2, 3, 100, 0.1, 'gauss', 'true', \
RowsF, ColsF, _, _, _, _, _, _, _, _)
points_foerstner (ImageT, 1, 2, 3, 100, 0.1, 'gauss', 'true', \
RowsT, ColsT, _, _, _, _, _, _, _, _)
proj_match_points_ransac (ImageF, ImageT, RowsF, ColsF, RowsT, ColsT, \
'ncc', 10, 0, 0, 480, 640, 0, 0.5, \
'gold_standard', 2, 42, HomMat2D, \
Points1, Points2)
HomMatrices2D := [HomMatrices2D,HomMat2D]
Rows1 := [Rows1,subset(RowsF,Points1)]
Cols1 := [Cols1,subset(ColsF,Points1)]
Rows2 := [Rows2,subset(RowsT,Points2)]
Cols2 := [Cols2,subset(ColsT,Points2)]
NumMatches := [NumMatches,|Points1|]
endfor

stationary_camera_self_calibration (4, 640, 480, 1, From, To, \
HomMatrices2D, Rows1, Cols1, \
Rows2, Cols2, NumMatches, \
'gold_standard', \
['focus','principal_point'], \
'true', CameraMatrix, Kappa, \
RotationMatrices, X, Y, Z, Error)
Result
If the parameters are valid, the operator stationary_camera_self_calibration returns the value 2
(H_MSG_TRUE). If necessary an exception is raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
proj_match_points_ransac, proj_match_points_ransac_guided
Possible Successors
gen_spherical_mosaic
See also
gen_projective_mosaic
References
Lourdes Agapito, E. Hayman, I. Reid: “Self-Calibration of Rotating and Zooming Cameras”; International Journal
of Computer Vision; vol. 45, no. 2; pp. 107–127; 2001.
Module
Calibration
Chapter 7

Classification

7.1 Gaussian Mixture Models

add_class_train_data_gmm ( : : GMMHandle,
ClassTrainDataHandle : )

Add training data to a Gaussian Mixture Model (GMM).
add_class_train_data_gmm adds the training data specified by ClassTrainDataHandle to a Gaus-
sian Mixture Model (GMM) specified by GMMHandle.
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
Handle of a GMM which receives the training data.
. ClassTrainDataHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of training data for a classifier.
Result
If the parameters are valid, the operator add_class_train_data_gmm returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• GMMHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_class_gmm, create_class_train_data
Possible Successors
get_sample_class_gmm
Alternatives
add_sample_class_gmm
See also
create_class_gmm
Module
Foundation
add_sample_class_gmm ( : : GMMHandle, Features, ClassID, Randomize : )

Add a training sample to the training data of a Gaussian Mixture Model.
add_sample_class_gmm adds a training sample to the Gaussian Mixture Model (GMM) given by
GMMHandle. The training sample is given by Features and ClassID. Features is the feature vector
of the sample, and consequently must be a real vector of length NumDim, as specified in create_class_gmm.
ClassID is the class of the sample, an integer between 0 and NumClasses-1 (set in create_class_gmm).
In the special case where the feature vectors are of integer type, they are lying in the feature space in a grid with
step width 1.0. For example, the RGB feature vectors typically used for color classification are triples having
integer values between 0 and 255 for each of their components. In fact, there might be even several feature vectors
representing the same point. When training a GMM with such data, the training algorithm may tend to align the
modeled Gaussians along linearly dependent lines or planes of data that are parallel to the grid dimensions. If
the number of Centers returned by train_class_gmm is unusually high, this indicates such a behavior of
the algorithm. The parameter Randomize can be used to handle such undesired effects. If Randomize > 0.0,
random Gaussian noise with mean 0 and standard deviation Randomize is added to each component of the
training data vectors, and the transformed training data is stored in the GMM. For values of Randomize ≤ 1.0,
the randomized data will look like small clouds around the grid points, which does not improve the properties of
the data cloud. For values of Randomize ≫ 2.0, the randomization might have too strong an influence on the
resulting GMM. For integer feature vectors, a value of Randomize between 1.5 and 2.0 is recommended, which
transforms the integer data into homogeneous clouds, without modifying its general form in the feature space. If
the data has been created from integer data by scaling, the same problem may occur. Here, Randomize must be
scaled with the same scale factor that was used to scale the original data.
Before the GMM can be trained with train_class_gmm, all training samples must be added to the GMM with
add_sample_class_gmm.
The number of currently stored training samples can be queried with get_sample_num_class_gmm. Stored
training samples can be read out again with get_sample_class_gmm.
Normally, it is useful to save the training samples in a file with write_samples_class_gmm. This facilitates
reusing the samples and, if necessary, adding new training samples to the data set, so that a newly created GMM
can be trained anew with the extended data set.
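A minimal sketch of adding integer RGB samples with Randomize = 1.5, assuming a region ClassRegion and three channel images; for whole images, add_samples_image_class_gmm is the more convenient alternative.

* Sketch: add the RGB values inside ClassRegion as samples of class 0
* (variable names are assumptions).
get_region_points (ClassRegion, Rows, Cols)
get_grayval (ImageR, Rows, Cols, ValsR)
get_grayval (ImageG, Rows, Cols, ValsG)
get_grayval (ImageB, Rows, Cols, ValsB)
for I := 0 to |Rows| - 1 by 1
    add_sample_class_gmm (GMMHandle, [ValsR[I],ValsG[I],ValsB[I]], 0, 1.5)
endfor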
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
GMM handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector of the training sample to be stored.
. ClassID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Class of the training sample to be stored.
. Randomize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Standard deviation of the Gaussian noise added to the training data.
Default: 0.0
Suggested values: Randomize ∈ {0.0, 1.5, 2.0}
Restriction: Randomize >= 0.0
Result
If the parameters are valid, the operator add_sample_class_gmm returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• GMMHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_class_gmm
Possible Successors
train_class_gmm, write_samples_class_gmm
Alternatives
read_samples_class_gmm, add_samples_image_class_gmm
See also
clear_samples_class_gmm, get_sample_num_class_gmm, get_sample_class_gmm
Module
Foundation

classify_class_gmm ( : : GMMHandle, Features, Num : ClassID, ClassProb,
Density, KSigmaProb )

Calculate the class of a feature vector by a Gaussian Mixture Model.
classify_class_gmm computes the best Num classes of the feature vector Features with the Gaussian
Mixture Model (GMM) GMMHandle and returns the classes in ClassID and the corresponding probabili-
ties of the classes in ClassProb. Before calling classify_class_gmm, the GMM must be trained with
train_class_gmm.
classify_class_gmm corresponds to a call to evaluate_class_gmm and an additional step that extracts
the best Num classes. As described with evaluate_class_gmm, the output values of the GMM can be in-
terpreted as probabilities of the occurrence of the respective classes. However, here the posterior probability
ClassProb is further normalized as ClassProb = p(i|x)/p(x), where p(i|x) and p(x) are specified with
evaluate_class_gmm. In most cases it should be sufficient to use Num = 1 in order to decide whether the
probability of the best class is high enough. In some applications it may be interesting to also take the second best
class into account (Num = 2), particularly if it can be expected that the classes show a significant degree of overlap.
Density and KSigmaProb are explained with evaluate_class_gmm.
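A minimal sketch of rejecting uncertain classifications; the feature vector and the probability threshold of 0.7 are assumptions.

* Sketch: classify one feature vector and keep the result only if the best
* class is sufficiently probable (threshold is an assumption).
classify_class_gmm (GMMHandle, [128.0,64.0,200.0], 1, ClassID, ClassProb, \
                    Density, KSigmaProb)
if (ClassProb[0] < 0.7)
    ClassID := -1
endif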
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
GMM handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector.
. Num (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of best classes to determine.
Default: 1
Suggested values: Num ∈ {1, 2, 3, 4, 5}
. ClassID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Result of classifying the feature vector with the GMM.
. ClassProb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
A-posteriori probability of the classes.
. Density (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Probability density of the feature vector.
. KSigmaProb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Normalized k-sigma-probability for the feature vector.
Result
If the parameters are valid, the operator classify_class_gmm returns the value 2 (H_MSG_TRUE). If neces-
sary an exception is raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
train_class_gmm, read_class_gmm
Alternatives
evaluate_class_gmm
See also
create_class_gmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Mario A.T. Figueiredo: “Unsupervised Learning of Finite Mixture Models”; IEEE Transactions on Pattern Analy-
sis and Machine Intelligence, Vol. 24, No. 3; March 2002.
Module
Foundation

clear_class_gmm ( : : GMMHandle : )

Clear a Gaussian Mixture Model.
clear_class_gmm clears the Gaussian Mixture Model (GMM) given by GMMHandle and frees all mem-
ory required for the GMM. After calling clear_class_gmm, the GMM can no longer be used. The handle
GMMHandle becomes invalid.
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .class_gmm(-array) ; handle
GMM handle.
Result
If the parameters are valid, the operator clear_class_gmm returns the value 2 (H_MSG_TRUE). If necessary
an exception is raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• GMMHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
classify_class_gmm, evaluate_class_gmm
See also
create_class_gmm, read_class_gmm, write_class_gmm, train_class_gmm
Module
Foundation

clear_samples_class_gmm ( : : GMMHandle : )

Clear the training data of a Gaussian Mixture Model.

clear_samples_class_gmm clears all training samples that have been stored in the Gaussian Mixture
Model (GMM) GMMHandle. clear_samples_class_gmm should only be used if the GMM is trained
in the same process that uses the GMM for evaluation with evaluate_class_gmm or for classification
with classify_class_gmm. In this case, the memory required for the training samples can be freed
with clear_samples_class_gmm, and hence memory can be saved. In the normal usage, in which the
GMM is trained offline and written to a file with write_class_gmm, it is typically unnecessary to call
clear_samples_class_gmm because write_class_gmm does not save the training samples, and hence
the online process, which reads the GMM with read_class_gmm, requires no memory for the training samples.
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .class_gmm(-array) ; handle
GMM handle.
Result
If the parameters are valid, the operator clear_samples_class_gmm returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information
• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• GMMHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
train_class_gmm, write_samples_class_gmm
See also
create_class_gmm, clear_class_gmm, add_sample_class_gmm, read_samples_class_gmm
Module
Foundation

create_class_gmm ( : : NumDim, NumClasses, NumCenters, CovarType,


Preprocessing, NumComponents, RandSeed : GMMHandle )

Create a Gaussian Mixture Model for classification.


create_class_gmm creates a Gaussian Mixture Model (GMM) for classification. NumDim specifies the num-
ber of dimensions of the feature space, NumClasses specifies the number of classes. A GMM consists of
NumCenters Gaussian centers per class. NumCenters can not only be the exact number of centers to be used,
but, depending on the number of parameters, can specify upper and lower bounds for the number of centers:

exactly one parameter: The parameter determines the exact number of centers to be used for all classes.
exactly two parameters: The first parameter determines the minimum number of centers, the second determines
the maximum number of centers for all classes.
exactly 2 · NumClasses parameters: Alternately, every first parameter determines the minimum number of
centers per class and every second parameter determines the maximum number of centers per class.

When upper and lower bounds are specified, the optimum number of centers is determined with the help of
the Minimum Message Length criterion (MML). In general, we recommend starting the training with (too) many
centers as the maximum and the expected number of centers as the minimum; the three variants of NumCenters
are illustrated in the sketch below.
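The sketch below shows the three forms of NumCenters for a hypothetical two-class GMM; all other parameter values are placeholders.

* a) one value: exactly 5 centers for every class
create_class_gmm (NumDim, 2, 5, 'full', 'none', 0, 42, GMMHandle1)
* b) two values: between 1 and 5 centers for every class (MML)
create_class_gmm (NumDim, 2, [1,5], 'full', 'none', 0, 42, GMMHandle2)
* c) 2 * NumClasses values: per-class bounds, here 1..5 for class 0
*    and 2..10 for class 1
create_class_gmm (NumDim, 2, [1,5,2,10], 'full', 'none', 0, 42, GMMHandle3)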
Each center is described by the parameters center m_j, covariance matrix C_j, and mixing coefficient P_j. These pa-
rameters are calculated from the training data by means of the Expectation Maximization (EM) algorithm. A GMM


can approximate an arbitrary probability density, provided that enough centers are being used. The covariance ma-
trices Cj have the dimensions NumDim · NumDim (NumComponents · NumComponents if preprocessing is
used) and are symmetric. Further constraints can be given by CovarType:
For CovarType = ’spherical’, C_j is a scalar multiple of the identity matrix, C_j = s_j^2 I. The center density
function p(x|j) is

p(x|j) = \frac{1}{(2\pi s_j^2)^{d/2}} \exp\left( -\frac{\lVert x - m_j \rVert^2}{2 s_j^2} \right)

For CovarType = ’diag’, C_j is a diagonal matrix, C_j = diag(s_{j,1}^2, \ldots, s_{j,d}^2). The center density
function p(x|j) is

p(x|j) = \frac{1}{\left( 2\pi \prod_{i=1}^{d} s_{j,i}^2 \right)^{d/2}} \exp\left( -\sum_{i=1}^{d} \frac{(x_i - m_{j,i})^2}{2 s_{j,i}^2} \right)

For CovarType = ’full’, C_j is a positive definite matrix. The center density function p(x|j) is

p(x|j) = \frac{1}{(2\pi)^{d/2} \, |C_j|^{1/2}} \exp\left( -\frac{1}{2} (x - m_j)^T C_j^{-1} (x - m_j) \right)

The complexity of the calculations increases from CovarType = ’spherical’ via CovarType = ’diag’ to
CovarType = ’full’. At the same time, the flexibility of the centers increases. In general, ’spherical’ therefore
needs higher values for NumCenters than ’full’.
The procedure to use GMM is as follows: First, a GMM is created by create_class_gmm. Then,
training vectors are added by add_sample_class_gmm, afterwards they can be written to disk with
write_samples_class_gmm. With train_class_gmm the classifier center parameters (defined above)
are determined. Furthermore, they can be saved with write_class_gmm for later classifications.
From the mixing probabilities Pj and the center density function p(x|j), the probability density function p(x) can
be calculated by:

p(x) = \sum_{j=1}^{n_{comp}} P(j) \, p(x|j)

The probability density function p(x) can be evaluated with evaluate_class_gmm for a feature vector x.
classify_class_gmm sorts the p(x) and therefore discovers the most probable class of the feature vector.
The parameters Preprocessing and NumComponents can be used to preprocess the training data and reduce
its dimensions. These parameters are explained in the description of the operator create_class_mlp.
create_class_gmm initializes the coordinates of the centers with random numbers. To ensure that the results of
training the classifier with train_class_gmm are reproducible, the seed value of the random number generator
is passed in RandSeed.
Parameters
. NumDim (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of dimensions of the feature space.
Default: 3
Suggested values: NumDim ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction: NumDim >= 1
. NumClasses (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of classes of the GMM.
Default: 5
Suggested values: NumClasses ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Restriction: NumClasses >= 1


. NumCenters (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer


Number of centers per class.
Default: 1
Suggested values: NumCenters ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30}
Restriction: NumCenters >= 1
. CovarType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the covariance matrices.
Default: ’spherical’
List of values: CovarType ∈ {’spherical’, ’diag’, ’full’}
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of preprocessing used to transform the feature vectors.
Default: ’normalization’
List of values: Preprocessing ∈ {’none’, ’normalization’, ’principal_components’, ’canonical_variates’}
. NumComponents (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = ’none’ and
Preprocessing = ’normalization’).
Default: 10
Suggested values: NumComponents ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction: NumComponents >= 1
. RandSeed (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Seed value of the random number generator that is used to initialize the GMM with random values.
Default: 42
. GMMHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
GMM handle.
Example

* Classification with Gaussian Mixture Models


create_class_gmm (NumDim, NumClasses, [1,5], 'full', 'none',\
NumComponents, 42, GMMHandle)
* Add the training data
for J := 0 to NumData-1 by 1
* Features := [...]
* ClassID := [...]
add_sample_class_gmm (GMMHandle, Features, ClassID, Randomize)
endfor
* Train the GMM
train_class_gmm (GMMHandle, 100, 0.001, 'training', 0.0001, Centers, Iter)
* Classify unknown data in 'Features'
classify_class_gmm (GMMHandle, Features, 1, ID, Prob, Density, KSigmaProb)

Result
If the parameters are valid, the operator create_class_gmm returns the value 2 (H_MSG_TRUE). If necessary
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
add_sample_class_gmm, add_samples_image_class_gmm
Alternatives
create_class_mlp, create_class_svm


See also
clear_class_gmm, train_class_gmm, classify_class_gmm, evaluate_class_gmm,
classify_image_class_gmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Mario A.T. Figueiredo: “Unsupervised Learning of Finite Mixture Models”; IEEE Transactions on Pattern Analy-
sis and Machine Intelligence, Vol. 24, No. 3; March 2002.
Module
Foundation

deserialize_class_gmm ( : : SerializedItemHandle : GMMHandle )

Deserialize a serialized Gaussian Mixture Model.


deserialize_class_gmm deserializes a Gaussian Mixture Model (GMM), including its training samples,
that was serialized by serialize_class_gmm (see fwrite_serialized_item for an introduction to the
basic principle of serialization). The serialized Gaussian Mixture Model is defined by the handle
SerializedItemHandle. The deserialized values are stored in an automatically created Gaussian Mixture
Model with the handle GMMHandle.
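A minimal sketch of the serialization round trip; transport of the serialized item is only indicated in the comments, and the handle names are placeholders.

* Serialize a trained GMM ...
serialize_class_gmm (GMMHandle, SerializedItemHandle)
* ... transfer or store SerializedItemHandle, e.g., with
* send_serialized_item or fwrite_serialized_item ...
* ... and restore it in the receiving process.
deserialize_class_gmm (SerializedItemHandle, GMMHandleRestored)
classify_class_gmm (GMMHandleRestored, Features, 1, ClassID, ClassProb,\
                    Density, KSigmaProb)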
Parameters
. SerializedItemHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
. GMMHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
GMM handle.
Result
If the parameters are valid, the operator deserialize_class_gmm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
fread_serialized_item, receive_serialized_item, serialize_class_gmm
Possible Successors
classify_class_gmm, evaluate_class_gmm, create_class_lut_gmm
See also
create_class_gmm, write_class_gmm, serialize_class_gmm
Module
Foundation

evaluate_class_gmm ( : : GMMHandle, Features : ClassProb,


Density, KSigmaProb )

Evaluate a feature vector by a Gaussian Mixture Model.


evaluate_class_gmm computes three different probability values for a feature vector Features with the
Gaussian Mixture Model (GMM) GMMHandle.
The a-posteriori probability of class i for the sample Features(x) is computed as
p(i|x) = \sum_{j=1}^{n_{comp}} P(j) \, p(x|j)


and returned for each class in ClassProb. The formulas for the calculation of the center density function p(x|j)
are described with create_class_gmm.
The probability density of the feature vector is computed as a sum of the posterior class probabilities

p(x) = \sum_{i=1}^{n_{classes}} Pr(i) \, p(i|x)

and is returned in Density. Here, Pr(i) are the prior class probabilities as computed by train_class_gmm.
Density can be used for novelty detection, i.e., to reject feature vectors that do not belong to any of the trained
classes. However, since Density depends on the scaling of the feature vectors and since Density is a probabil-
ity density, and consequently does not need to lie between 0 and 1, the novelty detection can typically be performed
more easily with KSigmaProb (see below).
A k-sigma error ellipsoid is defined as a locus of points for which

(x - \mu)^T C^{-1} (x - \mu) = k^2

In the one dimensional case this is the interval [µ − kσ, µ + kσ]. For any 1D Gaussian distribution, it is true that
approximately 68% of the occurrences of the random variable are within this range for k = 1, approximately 95%
for k = 2, approximately 99% for k = 3, etc. This probability is called k-sigma probability and is denoted by
P[k]. P[k] can be computed numerically for univariate as well as for multivariate Gaussian distributions, where it
should be noted that for the same values of k, P^{(N)}[k] > P^{(N+1)}[k] (here N and N+1 denote the dimensions). For
Gaussian mixture models the k-sigma probability is computed as:

P_{GMM}[x] = \sum_{j=1}^{n_{comp}} P(j) \, P_j[k_j]

where

k_j^2 = (x - \mu_j)^T C_j^{-1} (x - \mu_j)

The P_{GMM}[x] are weighted with the class priors and then normalized. The maximum value of all classes is returned
in KSigmaProb, such that

KSigmaProb = \frac{1}{Pr_{max}} \max\left( Pr(i) \, P_{GMM}[x] \right)

KSigmaProb can be used for novelty detection, as it indicates how well a feature vector fits into the distribution
of the class it is assigned to. Typically, feature vectors having values below 0.0001 should be rejected. Note that
the rejection threshold defined by the parameter RejectionThreshold in classify_image_class_gmm
refers to the KSigmaProb values.
Before calling evaluate_class_gmm, the GMM must be trained with train_class_gmm.
The position of the maximum value of ClassProb is usually interpreted as the class of the feature vector and the
corresponding value as the probability of the class. In this case, classify_class_gmm should be used instead
of evaluate_class_gmm, because classify_class_gmm directly returns the class and corresponding
probability.
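As a sketch of the novelty detection described above (the threshold 0.0001 is the value suggested in the text; Features and GMMHandle are placeholders for a new feature vector and a trained classifier):

evaluate_class_gmm (GMMHandle, Features, ClassProb, Density, KSigmaProb)
if (KSigmaProb < 0.0001)
    * The vector does not fit any of the trained classes well: reject it.
else
    * Accept the vector; its most probable class is the position of the
    * maximum value in ClassProb (or use classify_class_gmm directly).
endif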
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
GMM handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector.
. ClassProb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
A-posteriori probability of the classes.
. Density (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real ; real
Probability density of the feature vector.


. KSigmaProb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real


Normalized k-sigma-probability for the feature vector.
Result
If the parameters are valid, the operator evaluate_class_gmm returns the value 2 (H_MSG_TRUE). If neces-
sary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
train_class_gmm, read_class_gmm
Alternatives
classify_class_gmm
See also
create_class_gmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Mario A.T. Figueiredo: “Unsupervised Learning of Finite Mixture Models”; IEEE Transactions on Pattern Analy-
sis and Machine Intelligence, Vol. 24, No. 3; March 2002.
Module
Foundation

get_class_train_data_gmm ( : : GMMHandle : ClassTrainDataHandle )

Get the training data of a Gaussian Mixture Model (GMM).


get_class_train_data_gmm gets the training data of a Gaussian Mixture Model (GMM) and returns it in
ClassTrainDataHandle.
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
Handle of a GMM that contains training data.
. ClassTrainDataHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of the training data of the classifier.
Result
If the parameters are valid, the operator get_class_train_data_gmm returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
add_sample_class_gmm, read_samples_class_gmm
Possible Successors
add_class_train_data_mlp, add_class_train_data_svm, add_class_train_data_knn
See also
create_class_train_data


Module
Foundation

get_params_class_gmm ( : : GMMHandle : NumDim, NumClasses,


MinCenters, MaxCenters, CovarType )

Return the parameters of a Gaussian Mixture Model.


get_params_class_gmm returns the parameters of a Gaussian Mixture Model (GMM) that were specified
when the GMM was created with create_class_gmm. This is particularly useful if the GMM was read with
read_class_gmm. The output of get_params_class_gmm can, for example, be used to check whether
the feature vectors and/or the target data to be used have appropriate dimensions to be used with GMM. For a
description of the parameters, see create_class_gmm.
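A small sketch of such a consistency check, assuming the classifier was stored under the (placeholder) file name 'classifier.ggc':

read_class_gmm ('classifier.ggc', GMMHandle)
get_params_class_gmm (GMMHandle, NumDim, NumClasses, MinCenters,\
                      MaxCenters, CovarType)
* Features := [...]
if (|Features| != NumDim)
    * The feature vector does not match the classifier's feature space.
endif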
Parameters

. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle


GMM handle.
. NumDim (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of dimensions of the feature space.
. NumClasses (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of classes of the GMM.
. MinCenters (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Minimum number of centers per GMM class.
. MaxCenters (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Maximum number of centers per GMM class.
. CovarType (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the covariance matrices.
Result
If the parameters are valid, the operator get_params_class_gmm returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
create_class_gmm, read_class_gmm
Possible Successors
add_sample_class_gmm, train_class_gmm
See also
evaluate_class_gmm, classify_class_gmm
Module
Foundation

get_prep_info_class_gmm ( : : GMMHandle,
Preprocessing : InformationCont, CumInformationCont )

Compute the information content of the preprocessed feature vectors of a GMM.


get_prep_info_class_gmm computes the information content of the training vectors that have been
transformed with the preprocessing given by Preprocessing. Preprocessing can be set to ’princi-
pal_components’ or ’canonical_variates’. The preprocessing methods are described with create_class_mlp.
The information content is derived from the variations of the transformed components of the feature vector, i.e., it


is computed solely based on the training data, independent of any error rate on the training data. The information
content is computed for all relevant components of the transformed feature vectors (NumComponents for ’princi-
pal_components’ and ’canonical_variates’, see create_class_gmm), and is returned in InformationCont
as a number between 0 and 1. To convert the information content into a percentage, it simply needs to be mul-
tiplied by 100. The cumulative information content of the first n components is returned in the n-th compo-
nent of CumInformationCont, i.e., CumInformationCont contains the sums of the first n elements of
InformationCont. To use get_prep_info_class_gmm, a sufficient number of samples must be added
to the GMM given by GMMHandle by using add_sample_class_gmm or read_samples_class_gmm.
InformationCont and CumInformationCont can be used to decide how many components of the
transformed feature vectors contain relevant information. An often used criterion is to require that the trans-
formed data must represent x% (e.g., 90%) of the data. This can be decided easily from the first value
of CumInformationCont that lies above x%. The number thus obtained can be used as the value for
NumComponents in a new call to create_class_gmm. The call to get_prep_info_class_gmm al-
ready requires the creation of a GMM, and hence the setting of NumComponents in create_class_gmm
to an initial value. However, if get_prep_info_class_gmm is called, it is typically not known how many
components are relevant, and hence how to set NumComponents in this call. Therefore, the following two-step
approach should typically be used to select NumComponents: In a first step, a GMM with the maximum num-
ber for NumComponents is created (NumComponents for ’principal_components’ and ’canonical_variates’).
Then, the training samples are added to the GMM and are saved in a file using write_samples_class_gmm.
Subsequently, get_prep_info_class_gmm is used to determine the information content of the compo-
nents, and with this NumComponents. After this, a new GMM with the desired number of components is
created, and the training samples are read with read_samples_class_gmm. Finally, the GMM is trained with
train_class_gmm.
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
GMM handle.
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of preprocessing used to transform the feature vectors.
Default: ’principal_components’
List of values: Preprocessing ∈ {’principal_components’, ’canonical_variates’}
. InformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Relative information content of the transformed feature vectors.
. CumInformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Cumulative information content of the transformed feature vectors.
Example

* Create the initial GMM


create_class_gmm (NumDim, NumClasses, NumCenters, 'full',\
'principal_components', NumComponents, 42, GMMHandle)
* Generate and add the training data
for J := 0 to NumData-1 by 1
* Generate training features and classes
* Data = [...]
* ClassID = [...]
add_sample_class_gmm (GMMHandle, Data, ClassID, Randomize)
endfor
write_samples_class_gmm (GMMHandle, 'samples.gtf')
* Compute the information content of the transformed features
get_prep_info_class_gmm (GMMHandle, 'principal_components',\
InformationCont, CumInformationCont)
* Determine Comp by inspecting InformationCont and CumInformationCont
* NumComponents = [...]
* Create the actual GMM
create_class_gmm (NumDim, NumClasses, NumCenters, 'full',\
'principal_components', NumComponents, 42, GMMHandle)
* Train the GMM
read_samples_class_gmm (GMMHandle, 'samples.gtf')


train_class_gmm (GMMHandle, 200, 0.0001, 'training', 0.0001, Centers, Iter)


write_class_gmm (GMMHandle, 'classifier.gmm')

Result
If the parameters are valid, the operator get_prep_info_class_gmm returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
get_prep_info_class_gmm may return the error 9211 (Matrix is not positive definite) if Preprocessing
= ’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
add_sample_class_gmm, read_samples_class_gmm
Possible Successors
clear_class_gmm, create_class_gmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation

get_sample_class_gmm ( : : GMMHandle, NumSample : Features,


ClassID )

Return a training sample from the training data of a Gaussian Mixture Model (GMM).
get_sample_class_gmm reads out a training sample from the Gaussian Mixture Model (GMM) given
by GMMHandle that was stored with add_sample_class_gmm or add_samples_image_class_gmm.
The index of the sample is specified with NumSample. The index is counted from 0, i.e., NumSample
must be a number between 0 and NumSamples − 1, where NumSamples can be determined with
get_sample_num_class_gmm. The training sample is returned in Features and ClassID. Features
is a feature vector of length NumDim, while ClassID is its class (see add_sample_class_gmm and
create_class_gmm).
get_sample_class_gmm can, for example, be used to reclassify the training data with
classify_class_gmm in order to determine which training samples, if any, are classified incorrectly.
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
GMM handle.
. NumSample (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Index of the stored training sample.
. Features (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector of the training sample.
. ClassID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Class of the training sample.
Example

create_class_gmm (2, 2, [1,10], 'spherical', 'none', 2, 42, GMMHandle)


read_samples_class_gmm (GMMHandle, 'samples.gsf')
train_class_gmm (GMMHandle, 100, 1e-4, 'training', 1e-4, Centers, Iter)


* Reclassify the training samples


get_sample_num_class_gmm (GMMHandle, NumSamples)
for I := 0 to NumSamples-1 by 1
get_sample_class_gmm (GMMHandle, I, Features, Class)
classify_class_gmm (GMMHandle, Features, 2, ClassID, ClassProb,\
Density, KSigmaProb)
if (not (Class == ClassID[0]))
* classified incorrectly
endif
endfor

Result
If the parameters are valid, the operator get_sample_class_gmm returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
add_sample_class_gmm, add_samples_image_class_gmm, read_samples_class_gmm,
get_sample_num_class_gmm
Possible Successors
classify_class_gmm, evaluate_class_gmm
See also
create_class_gmm
Module
Foundation

get_sample_num_class_gmm ( : : GMMHandle : NumSamples )

Return the number of training samples stored in the training data of a Gaussian Mixture Model (GMM).
get_sample_num_class_gmm returns in NumSamples the number of training samples that are stored in the
Gaussian Mixture Model (GMM) given by GMMHandle. get_sample_num_class_gmm should be called
before the individual training samples are read out with get_sample_class_gmm, e.g., for the purpose of
reclassifying the training data (see get_sample_class_gmm).
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
GMM handle.
. NumSamples (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of stored training samples.
Result
If the parameters are valid, the operator get_sample_num_class_gmm returns the value 2 (H_MSG_TRUE).
If necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.


Possible Predecessors
add_sample_class_gmm, add_samples_image_class_gmm, read_samples_class_gmm
Possible Successors
get_sample_class_gmm
See also
create_class_gmm
Module
Foundation

read_class_gmm ( : : FileName : GMMHandle )

Read a Gaussian Mixture Model from a file.


read_class_gmm reads a Gaussian Mixture Model (GMM) that has been stored with write_class_gmm.
Since the training of a GMM can take a relatively long time, the GMM is typically trained in an offline
process and written to a file with write_class_gmm. In the online process, the GMM is read with
read_class_gmm and subsequently used for evaluation with evaluate_class_gmm or for classification
with classify_class_gmm. The default HALCON file extension for the GMM classifier is ’ggc’.
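A minimal sketch of this offline/online split; the file name and the feature vector are placeholders.

* Offline: train the GMM and write it to a file.
train_class_gmm (GMMHandle, 100, 0.001, 'training', 0.0001, Centers, Iter)
write_class_gmm (GMMHandle, 'classifier.ggc')
* Online: read the classifier and use it for classification.
read_class_gmm ('classifier.ggc', GMMHandleOnline)
classify_class_gmm (GMMHandleOnline, Features, 1, ClassID, ClassProb,\
                    Density, KSigmaProb)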
Parameters
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name.
File extension: .ggc
. GMMHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
GMM handle.
Result
If the parameters are valid, the operator read_class_gmm returns the value 2 (H_MSG_TRUE). If necessary an
exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
classify_class_gmm, evaluate_class_gmm, create_class_lut_gmm
See also
create_class_gmm, write_class_gmm
Module
Foundation

read_samples_class_gmm ( : : GMMHandle, FileName : )

Read the training data of a Gaussian Mixture Model from a file.


read_samples_class_gmm reads training samples from the file given by FileName and adds them to the
training samples that have already been stored in the Gaussian Mixture Model (GMM) given by GMMHandle.
The GMM must be created with create_class_gmm before calling read_samples_class_gmm. As
described with train_class_gmm and write_samples_class_gmm, read_samples_class_gmm,
add_sample_class_gmm, and write_samples_class_gmm can be used to build up a database of train-
ing samples, and hence to improve the performance of the GMM by retraining the GMM with extended data
sets.


It should be noted that the training samples must have the correct dimensionality. The feature vectors stored in
FileName must have the length NumDim that was specified with create_class_gmm, and enough classes
must have been created in create_class_gmm. If this is not the case, an error message is returned.
It is possible to read files of samples that were written with write_samples_class_svm or
write_samples_class_mlp.
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
GMM handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name.
Result
If the parameters are valid, the operator read_samples_class_gmm returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• GMMHandle

During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_class_gmm
Possible Successors
train_class_gmm
Alternatives
add_sample_class_gmm
See also
write_samples_class_gmm, write_samples_class_mlp, clear_samples_class_gmm
Module
Foundation

select_feature_set_gmm ( : : ClassTrainDataHandle, SelectionMethod,


GenParamName, GenParamValue : GMMHandle, SelectedFeatureIndices,
Score )

Selects an optimal combination from a set of features to classify the provided data.
select_feature_set_gmm selects an optimal subset from a set of features to solve a given clas-
sification problem. The classification problem has to be specified with annotated training data in
ClassTrainDataHandle and will be classified by a Gaussian Mixture Model. Details of the properties of
this classifier can be found in create_class_gmm.
The result of the operator is a trained classifier that is returned in GMMHandle. Additionally, the list of indices or
names of the selected features is returned in SelectedFeatureIndices. To use this classifier, calculate for
new input data all features mentioned in SelectedFeatureIndices and pass them to the classifier.
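A minimal sketch of this usage; the feature extraction itself is application-specific and only indicated by a placeholder (NewFeatures).

select_feature_set_gmm (ClassTrainDataHandle, 'greedy', [], [], GMMHandle,\
                        SelectedFeatureIndices, Score)
* Compute only the subfeatures listed in SelectedFeatureIndices for the
* new object, concatenated in the same order as they were selected.
* NewFeatures := [...]
classify_class_gmm (GMMHandle, NewFeatures, 1, ClassID, ClassProb,\
                    Density, KSigmaProb)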
A possible application of this operator can be a comparison of different parameter sets for certain feature extraction
techniques. Another application is to search for a feature that is discriminating between different classes.
To define the features that should be selected from ClassTrainDataHandle, the dimensions
of the feature vectors in ClassTrainDataHandle can be grouped into subfeatures by calling


set_feature_lengths_class_train_data. A subfeature can contain several subsequent elements of


a feature vector. select_feature_set_gmm decides for each of these subfeatures whether it is better to use it
for the classification or to leave it out.
The indices of the selected subfeatures are returned in SelectedFeatureIndices. If names were set
in set_feature_lengths_class_train_data, these names are returned instead of the indices. If
set_feature_lengths_class_train_data was not called for ClassTrainDataHandle before,
each element of the feature vector is considered as a subfeature.
The selection method SelectionMethod is either a greedy search ’greedy’ (iteratively add the feature with the
highest gain) or the dynamically oscillating search ’greedy_oscillating’ (add the feature with the highest gain and
then test whether any of the already added features can be left out without a great loss). The method ’greedy’ is
generally preferable because it is faster. Only if the subfeatures are low-dimensional or redundant should the
method ’greedy_oscillating’ be chosen.
The optimization criterion is the classification rate of a two-fold cross-validation of the training data. The best
achieved value is returned in Score.
The following generic parameters can be set in GenParamName and GenParamValue:

’min_centers’: Minimal number of clusters to represent a class in the training data.


Suggested values: ’1’, ’2’
Default: ’1’
’max_center’: Maximal number of clusters to represent a class in the training data.
Suggested values: ’1’, ’5’, ’10’
Default: ’1’
’covar_type’: Type of the covariance to represent the size of a cluster.
List of values: ’spherical’, ’diag’, ’full’
Default: ’spherical’
’random_seed’: Random seed.
Default: ’42’
’threshold’: Training threshold.
Default: ’0.001’
’regularize’: Regularization value.
Default: ’0.0001’
’randomize’: Randomize the input vector.
Default: ’0’
’class_priors’: Mode to determine the a-priori probabilities of the classes.
List of values: ’training’, ’uniform’
Default: ’training’

A more exact description of those parameters can be found in create_class_gmm and train_class_gmm.
Attention
This operator may take considerable time, depending on the size of the data set in the training file, and the number
of features.
Please note that this operator should not be called if only a small set of training data is available. Due to the risk of
overfitting, the operator select_feature_set_gmm may deliver a classifier with a very high score. However,
the classifier may perform poorly when tested.
Parameters

. ClassTrainDataHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle


Handle of the training data.
. SelectionMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Method to perform the selection.
Default: ’greedy’
List of values: SelectionMethod ∈ {’greedy’, ’greedy_oscillating’}


. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string


Names of generic parameters to configure the classifier.
Default: []
List of values: GenParamName ∈ {’min_centers’, ’max_center’, ’covar_type’, ’random_seed’, ’threshold’,
’regularize’, ’randomize’, ’class_priors’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer / string
Values of generic parameters to configure the classifier.
Default: []
Suggested values: GenParamValue ∈ {1, 2, 3, ’spherical’, ’diag’, ’full’, 42, 0.001, 0.0001, 0}
. GMMHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
A trained GMM classifier using only the selected features.
. SelectedFeatureIndices (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
The selected feature set, contains indices or names.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
The achieved score using two-fold cross-validation.
Example

* Find out which of the two features distinguishes two Classes


NameFeature1 := 'Good Feature'
NameFeature2 := 'Bad Feature'
LengthFeature1 := 3
LengthFeature2 := 2
* Create training data
create_class_train_data (LengthFeature1+LengthFeature2,\
ClassTrainDataHandle)
* Define the features which are in the training data
set_feature_lengths_class_train_data (ClassTrainDataHandle, [LengthFeature1,\
LengthFeature2], [NameFeature1, NameFeature2])
* Add training data
* |Feat1| |Feat2|
add_sample_class_train_data (ClassTrainDataHandle, 'row', [1,1,1, 2,1 ], 0)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [2,2,2, 2,1 ], 1)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [1,1,1, 3,4 ], 0)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [2,2,2, 3,4 ], 1)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [0,0,1, 5,6 ], 0)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [2,3,2, 5,6 ], 1)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [0,0,1, 5,6 ], 0)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [2,3,2, 5,6 ], 1)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [0,0,1, 5,6 ], 0)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [2,3,2, 5,6 ], 1)
* Add more data
* ...
* Select the better feature with a GMM
select_feature_set_gmm (ClassTrainDataHandle, 'greedy', [], [], GMMHandle,\
SelectedFeatureGMM, Score)
* Use the classifier
* ...

Result
If the parameters are valid, the operator select_feature_set_gmm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.


This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
create_class_train_data, add_sample_class_train_data,
set_feature_lengths_class_train_data
Possible Successors
classify_class_gmm
Alternatives
select_feature_set_mlp, select_feature_set_knn, select_feature_set_svm
See also
create_class_gmm, gray_features, region_features
Module
Foundation

serialize_class_gmm ( : : GMMHandle : SerializedItemHandle )

Serialize a Gaussian Mixture Model (GMM).


serialize_class_gmm serializes a Gaussian Mixture Model (GMM) and its stored training samples (see
fwrite_serialized_item for an introduction to the basic principle of serialization). The same data that is
written to a file by write_class_gmm and write_samples_class_gmm is converted to a serialized item.
The Gaussian Mixture Model is defined by the handle GMMHandle. The serialized Gaussian Mixture Model is
returned by the handle SerializedItemHandle and can be deserialized by deserialize_class_gmm.
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
GMM handle.
. SerializedItemHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
Result
If the parameters are valid, the operator serialize_class_gmm returns the value 2 (H_MSG_TRUE). If nec-
essary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
train_class_gmm
Possible Successors
clear_class_gmm, fwrite_serialized_item, send_serialized_item,
deserialize_class_gmm
See also
create_class_gmm, read_class_gmm, write_samples_class_gmm,
deserialize_class_gmm
Module
Foundation

train_class_gmm ( : : GMMHandle, MaxIter, Threshold, ClassPriors,


Regularize : Centers, Iter )

Train a Gaussian Mixture Model.


train_class_gmm trains the Gaussian Mixture Model (GMM) referenced by GMMHandle. Before the
GMM can be trained, all training samples to be used for the training must be stored in the GMM using
add_sample_class_gmm, add_samples_image_class_gmm, or read_samples_class_gmm. Af-
ter the training, new training samples can be added to the GMM and the GMM can be trained again.
During the training, the error that results from the GMM applied to the training vectors will be minimized with the
expectation maximization (EM) algorithm.
MaxIter specifies the maximum number of iterations per class for the EM algorithm. In practice, values between
20 and 200 should be sufficient for most problems. Threshold specifies a threshold for the relative changes
of the error. If the relative change in error exceeds the threshold after MaxIter iterations, the algorithm will be
canceled for this class. Because the algorithm starts with the maximum specified number of centers (parameter
NumCenters in create_class_gmm), in case of a premature termination the number of centers and the error
for this class will not be optimal. In this case, a new training with different parameters (e.g., another value for
RandSeed in create_class_gmm) can be tried.
ClassPriors specifies the method of calculation of the class priors in GMM. If ’training’ is specified, the
priors of the classes are taken from the proportion of the corresponding sample data during training. If ’uniform’
is specified, the priors are set equal to 1/NumClasses for all classes.
Regularize is used to regularize (nearly) singular covariance matrices during the training. A covariance matrix
might collapse to singularity if it is trained with linearly dependent data. To avoid this, a small value specified by
Regularize is added to each main diagonal element of the covariance matrix, which prevents this element from
becoming smaller than Regularize. A recommended value for Regularize is 0.0001. If Regularize is
set to 0.0, no regularization is performed.
The centers are initially randomly distributed. In individual cases, relatively high errors will result from the al-
gorithm because the initial random values determined by RandSeed in create_class_gmm lead to local
minima. In this case, a new GMM with a different value for RandSeed should be generated to test whether a
significantly smaller error can be obtained.
It should be noted that, depending on the number of centers, the type of covariance matrix, and the number of
training samples, the training can take from a few seconds to several hours.
On output, train_class_gmm returns in Centers the number of centers per class that have been
found to be optimal by the EM algorithm. These values can be used as a reference in NumCenters (in
create_class_gmm) for future GMMs. If the number of centers found by training a new GMM on integer
training data is unexpectedly high, this might be corrected by adding a Randomize noise to the training data in
add_sample_class_gmm. Iter contains the number of performed iterations per class. If a value in Iter
equals MaxIter, the training algorithm has been terminated prematurely (see above).
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
GMM handle.
. MaxIter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Maximum number of iterations of the expectation maximization algorithm
Default: 100
Suggested values: MaxIter ∈ {10, 20, 30, 50, 100, 200}
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Threshold for relative change of the error for the expectation maximization algorithm to terminate.
Default: 0.001
Suggested values: Threshold ∈ {0.001, 0.0001}
Restriction: Threshold >= 0.0 && Threshold <= 1.0
. ClassPriors (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Mode to determine the a-priori probabilities of the classes
Default: ’training’
List of values: ClassPriors ∈ {’training’, ’uniform’}
. Regularize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Regularization value for preventing covariance matrix singularity.
Default: 0.0001
Restriction: Regularize >= 0.0 && Regularize < 1.0
. Centers (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Number of found centers per class


. Iter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer


Number of executed iterations per class
Example

create_class_gmm (NumDim, NumClasses, [1,5], 'full', 'none', 0, 42,\


GMMHandle)
* Add the training data
read_samples_class_gmm (GMMHandle, 'samples.gsf')
* Train the GMM
train_class_gmm (GMMHandle, 100, 1e-4, 'training', 1e-4, Centers, Iter)
* Write the Gaussian Mixture Model to file
write_class_gmm (GMMHandle, 'gmmclassifier.gmm')

Result
If the parameters are valid, the operator train_class_gmm returns the value 2 (H_MSG_TRUE). If necessary
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
This operator modifies the state of the following input parameter:
• GMMHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
add_sample_class_gmm, read_samples_class_gmm
Possible Successors
evaluate_class_gmm, classify_class_gmm, write_class_gmm, create_class_lut_gmm
Alternatives
read_class_gmm
See also
create_class_gmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Mario A.T. Figueiredo: “Unsupervised Learning of Finite Mixture Models”; IEEE Transactions on Pattern Analy-
sis and Machine Intelligence, Vol. 24, No. 3; March 2002.
Module
Foundation

write_class_gmm ( : : GMMHandle, FileName : )

Write a Gaussian Mixture Model to a file.


write_class_gmm writes the Gaussian Mixture Model (GMM) GMMHandle to the file given by FileName.
The default HALCON file extension for the GMM classifier is ’ggc’. write_class_gmm is typically called
after the GMM has been trained with train_class_gmm. The GMM can be read with read_class_gmm.
write_class_gmm does not write any training samples that possibly have been stored in the GMM. For this
purpose, write_samples_class_gmm should be used.


Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
GMM handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name.
File extension: .ggc
Result
If the parameters are valid, the operator write_class_gmm returns the value 2 (H_MSG_TRUE). If necessary
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
train_class_gmm
Possible Successors
clear_class_gmm
See also
create_class_gmm, read_class_gmm, write_samples_class_gmm
Module
Foundation

write_samples_class_gmm ( : : GMMHandle, FileName : )

Write the training data of a Gaussian Mixture Model to a file.


write_samples_class_gmm writes the training samples stored in the Gaussian Mixture Model (GMM)
GMMHandle to the file given by FileName. write_samples_class_gmm can be used to build up a
database of training samples, and hence to improve the performance of the GMM by training it with an extended
data set (see train_class_gmm).
The file FileName is overwritten by write_samples_class_gmm. Nevertheless, extending the database
of training samples is easy because read_samples_class_gmm and add_sample_class_gmm add the
training samples to the training samples that are already stored in memory with the GMM.
The created file can be read with read_samples_class_mlp if a multilayer perceptron (MLP) classifier
should be used instead. The class of a training sample in the GMM corresponds to the component of the MLP
target vector that is set to 1.0.
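A sketch of how such a sample database can be extended over several sessions; the file name and the feature variables are placeholders.

* First session: collect samples and save them.
* Features := [...], ClassID := [...]
add_sample_class_gmm (GMMHandle, Features, ClassID, Randomize)
write_samples_class_gmm (GMMHandle, 'samples.gsf')
* Later session: reload the samples, add new ones, retrain, and save
* the extended sample set again.
read_samples_class_gmm (GMMHandle, 'samples.gsf')
add_sample_class_gmm (GMMHandle, NewFeatures, NewClassID, Randomize)
train_class_gmm (GMMHandle, 100, 0.001, 'training', 0.0001, Centers, Iter)
write_samples_class_gmm (GMMHandle, 'samples.gsf')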
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
GMM handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name.
Result
If the parameters are valid, the operator write_samples_class_gmm returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.


Possible Predecessors
add_sample_class_gmm
Possible Successors
clear_samples_class_gmm
See also
create_class_gmm, read_samples_class_gmm, read_samples_class_mlp,
write_samples_class_mlp
Module
Foundation

7.2 K-Nearest Neighbors

add_class_train_data_knn ( : : KNNHandle,
ClassTrainDataHandle : )

Add training data to a k-nearest neighbors (k-NN) classifier.


add_class_train_data_knn adds the training data specified by ClassTrainDataHandle to a k-
nearest neighbors (k-NN) classifier specified by KNNHandle.
Parameters
. KNNHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
Handle of a k-NN which receives the training data.
. ClassTrainDataHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Training data for a classifier.
Result
If the parameters are valid, the operator add_class_train_data_knn returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• KNNHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_class_knn, create_class_train_data
Possible Successors
get_sample_class_knn
Alternatives
add_sample_class_knn
See also
create_class_knn
Module
Foundation

add_sample_class_knn ( : : KNNHandle, Features, ClassID : )

Add a sample to a k-nearest neighbors (k-NN) classifier.


add_sample_class_knn adds a feature vector to a k-nearest neighbors (k-NN) data structure. The length of
a feature vector was specified in create_class_knn by NumDim. A handle to a k-NN data structure has to be
specified in KNNHandle.
The feature vectors are collected in Features. The length of the input tuple must be a multiple of NumDim.
Each feature vector needs a class, which is given by ClassID; if only one class ID is specified, it is used for
all vectors. The class is a natural number greater than or equal to 0. If only one class is used, the class has to be 0.
If the operator classify_image_class_knn will be used, all numbers from 0 to the number of classes - 1
should be used, since otherwise an empty region will be generated for each unused number.
It is allowed to add samples to an already trained k-NN classifier. The new data is only integrated after another
call to train_class_knn.
If the k-NN classifier has been trained with automatic feature normalization enabled, the supplied fea-
tures Features are interpreted as unnormalized and are normalized as it was defined by the last call to
train_class_knn. Please see train_class_knn for more information on normalization.
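A small sketch that adds two 3-dimensional samples in a single call; the feature values are purely illustrative, and passing empty generic parameters to train_class_knn is assumed here to select its defaults.

create_class_knn (3, KNNHandle)
* Two feature vectors of length NumDim = 3, concatenated into one tuple,
* with one class ID per vector.
add_sample_class_knn (KNNHandle, [0.1,0.2,0.3, 0.7,0.8,0.9], [0,1])
* The new samples are only used after (re)training the classifier.
train_class_knn (KNNHandle, [], [])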
Parameters
. KNNHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
Handle of the k-NN classifier.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
List of features to add.
. ClassID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Class IDs of the features.
Result
If the parameters are valid, the operator add_sample_class_knn returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• KNNHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
train_class_knn, read_class_knn
See also
create_class_knn, read_class_knn
References
Marius Muja, David G. Lowe: “Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration”;
International Conference on Computer Vision Theory and Applications (VISAPP 09); 2009.
Module
Foundation

classify_class_knn ( : : KNNHandle, Features : Result, Rating )

Search for the nearest neighbors of a given feature vector.


classify_class_knn searches for the ’k’ nearest neighbors of the feature vector given in Features. The
distance used to determine the nearest neighbors is the L2 distance between the given vector and the training samples.
The value of ’k’ can be set via set_params_class_knn. The results can either be the determined class of
the feature vector or the indices of the nearest neighbors. The selection of the result behavior can be made by
set_params_class_knn via the generic parameters ’method’ and ’max_num_classes’:


’classes_distance’: returns the nearest samples for each of maximally ’max_num_classes’ different classes, if they
have a representative in the nearest ’k’ neighbors. The results in Result are classes sorted by their minimal
distance in Rating. There is no efficient way to determine in a k-NN-tree the nearest neighbor for exactly
’max_num_classes’ classes.
’classes_frequency’: counts the occurrences of certain classes among the nearest ’k’ neighbors and returns the
occurring classes in Result sorted by their relative frequency that is returned in Rating. Again, maximally
’max_num_classes’ values are returned.
’classes_weighted_frequencies’: counts the occurrences of certain classes among the nearest ’k’ neighbors and
returns the occurring classes in Result sorted by their relative frequency weighted with the average distance
that is returned in Rating. Again, maximally ’max_num_classes’ values are returned.
’neighbors_distance’: returns the indices of the nearest ’k’ neighbors in Result and the distances in Rating.

The default behavior is ’classes_distance’ and returns the classes and distances.
Parameters
. KNNHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
Handle of the k-NN classifier.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real
Features that should be classified.
. Result (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer
The classification result, either class IDs or sample indices.
. Rating (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real
A rating for the results. This value contains either a distance, a frequency or a weighted frequency.
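Example
A minimal sketch (feature values are illustrative) of classifying a new feature vector:

* Assume a classifier trained on 2-dimensional features (see add_sample_class_knn)
* Return the classes of the 3 nearest neighbors sorted by distance
set_params_class_knn (KNNHandle, ['method','k'], ['classes_distance',3])
classify_class_knn (KNNHandle, [0.12,0.22], Result, Rating)
* Result[0] is the class with the smallest distance, Rating[0] the distance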
Result
If the parameters are valid, the operator classify_class_knn returns the value 2 (H_MSG_TRUE). If neces-
sary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.

Possible Predecessors
train_class_knn, read_class_knn, set_params_class_knn
Possible Successors
clear_class_knn
See also
create_class_knn, read_class_knn
References
Marius Muja, David G. Lowe: “Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration”;
International Conference on Computer Vision Theory and Applications (VISAPP 09); 2009.
Module
Foundation

clear_class_knn ( : : KNNHandle : )

Clear a k-NN classifier.


clear_class_knn clears the k-NN classifier given in KNNHandle. After calling clear_class_knn,
KNNHandle becomes invalid.


Parameters
. KNNHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
Handle of the k-NN classifier.
Result
If the parameters are valid, the operator clear_class_knn returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• KNNHandle

During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
train_class_knn, read_class_knn
See also
create_class_knn
References
Marius Muja, David G. Lowe: “Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration”;
International Conference on Computer Vision Theory and Applications (VISAPP 09); 2009.
Module
Foundation

create_class_knn ( : : NumDim : KNNHandle )

Create a k-nearest neighbors (k-NN) classifier.


create_class_knn creates a k-nearest neighbors (k-NN) data structure. This can be either used to classify
data or to approximately locate nearest neighbors in a NumDim-dimensional space.
Most of the operators described in Classification/K-Nearest-Neighbor use the resulting handle KNNHandle.
The k-NN classifier works by approximately searching for the nearest neighbors and returning their classes as the result. Because of this approximation, the search time grows only logarithmically with the number of samples and dimensions.
The dimension of the feature vectors is the only parameter that necessarily has to be set in NumDim.
Parameters
. NumDim (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer
Number of dimensions of the feature.
Default: 10
. KNNHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
Handle of the k-NN classifier.
Result
If the parameters are valid, the operator create_class_knn returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).


• Processed without parallelization.


This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
add_sample_class_knn, train_class_knn
Alternatives
create_class_svm, create_class_mlp
See also
select_feature_set_knn, read_class_knn
References
Marius Muja, David G. Lowe: “Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration”;
International Conference on Computer Vision Theory and Applications (VISAPP 09); 2009.
Module
Foundation

deserialize_class_knn ( : : SerializedItemHandle : KNNHandle )

Deserialize a serialized k-NN classifier.


deserialize_class_knn deserializes a k-NN classifier (including its training samples) that was serialized by serialize_class_knn (see fwrite_serialized_item for an introduction to the basic principle of serialization). The serialized k-NN classifier is defined by the handle SerializedItemHandle.
The deserialized values are stored in an automatically created k-NN classifier with the handle KNNHandle.
Parameters

. SerializedItemHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle


Handle of the serialized item.
. KNNHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
Handle of the k-NN classifier.
Result
If the parameters are valid, the operator deserialize_class_knn returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
fread_serialized_item, receive_serialized_item, serialize_class_knn
Possible Successors
classify_class_knn
Alternatives
serialize_class_knn
See also
create_class_knn
References
Marius Muja, David G. Lowe: “Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration”;
International Conference on Computer Vision Theory and Applications (VISAPP 09); 2009.
Module
Foundation


get_class_train_data_knn ( : : KNNHandle : ClassTrainDataHandle )

Get the training data of a k-nearest neighbors (k-NN) classifier.


get_class_train_data_knn gets the training data of a k-nearest neighbors (k-NN) classifier and returns it
in ClassTrainDataHandle.
Parameters
. KNNHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
Handle of the k-NN classifier that contains training data.
. ClassTrainDataHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of the training data of the classifier.
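Example
A minimal sketch (assuming a k-NN classifier with samples in KNNHandle) of extracting the accumulated training data, e.g., to reuse it with another classifier type:

* Extract the training data from the k-NN classifier
get_class_train_data_knn (KNNHandle, ClassTrainDataHandle)
* The training data can now be added to another classifier,
* e.g., with add_class_train_data_svm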
Result
If the parameters are valid, the operator get_class_train_data_knn returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
add_sample_class_knn
Possible Successors
add_class_train_data_svm, add_class_train_data_gmm, add_class_train_data_knn
See also
create_class_train_data
Module
Foundation

get_params_class_knn ( : : KNNHandle, GenParamName : GenParamValue )

Get parameters of a k-NN classification.


get_params_class_knn gets parameters of the k-NN referred by KNNHandle. The possible entries in
GenParamName are:

’method’: Retrieve the currently selected method for determining the result of classify_class_knn. The re-
sult can be ’classes_distance’, ’classes_frequency’, ’classes_weighted_frequencies’ or ’neighbors_distance’.
’k’: The number of nearest neighbors that is considered to determine the results.
’max_num_classes’: The maximum number of classes that are returned. This parameter is ignored in case the
method ’neighbors_distance’ is selected.
’num_checks’: Defines the maximum number of runs through the trees.
’epsilon’: A parameter to lower the accuracy in the tree to gain speed.

Parameters
. KNNHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
Handle of the k-NN classifier.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
Names of the parameters that can be read from the k-NN classifier.
Default: [’method’,’k’]
List of values: GenParamName ∈ {’method’, ’num_checks’, ’epsilon’, ’k’}


. GenParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer / real / string


Values of the selected parameters.
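Example
A minimal sketch (assuming a k-NN classifier in KNNHandle) of querying the current settings:

* Query the result method and the number of neighbors
get_params_class_knn (KNNHandle, ['method','k'], GenParamValue)
* GenParamValue contains one value per requested parameter, in the same order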
Result
If the parameters are valid, the operator get_params_class_knn returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
train_class_knn, read_class_knn
Possible Successors
classify_class_knn
See also
create_class_knn, read_class_knn
References
Marius Muja, David G. Lowe: “Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration”;
International Conference on Computer Vision Theory and Applications (VISAPP 09); 2009.
Module
Foundation

get_sample_class_knn ( : : KNNHandle, IndexSample : Features, ClassID )

Return a training sample from the training data of a k-nearest neighbors (k-NN) classifier.
get_sample_class_knn reads a training sample from the k-nearest neighbors (k-NN) classifier given by
KNNHandle that was added with add_sample_class_knn or read_class_knn. The index of the sample
is specified with IndexSample. The index is counted from 0, i.e., IndexSample must be a number between
0 and NumSamples −1, where NumSamples can be determined with get_sample_num_class_knn. The
training sample is returned in Features and ClassID. Features is a feature vector of length NumDim (see
create_class_knn), while ClassID is the class label, which is a number between 0 and the number of
classes.
Parameters

. KNNHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle


Handle of the k-NN classifier.
. IndexSample (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Index of the training sample.
. Features (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector of the training sample.
. ClassID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Class of the training sample.
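Example
A minimal sketch (assuming a k-NN classifier with samples in KNNHandle) of iterating over all stored training samples:

* Determine the number of stored samples
get_sample_num_class_knn (KNNHandle, NumSamples)
* Inspect each sample and its class
for IndexSample := 0 to NumSamples - 1 by 1
    get_sample_class_knn (KNNHandle, IndexSample, Features, ClassID)
endfor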
Result
If the parameters are valid the operator get_sample_class_knn returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.


Possible Predecessors
add_sample_class_train_data
See also
create_class_knn
Module
Foundation

get_sample_num_class_knn ( : : KNNHandle : NumSamples )

Return the number of training samples stored in the training data of a k-nearest neighbors (k-NN) classifier.
get_sample_num_class_knn returns in NumSamples the number of training samples that are stored in the
k-nearest neighbors (k-NN) classifier given by KNNHandle. get_sample_num_class_knn should be called
before the individual training samples are accessed with get_sample_class_knn.
Parameters
. KNNHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
Handle of the k-NN classifier.
. NumSamples (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of stored training samples.
Result
If KNNHandle is valid, the operator get_sample_num_class_knn returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
add_sample_class_knn
Possible Successors
get_sample_class_knn
See also
create_class_knn
Module
Foundation

read_class_knn ( : : FileName : KNNHandle )

Read the k-NN classifier from a file.


read_class_knn reads the saved classifier from the file FileName (see write_class_knn). The values
of the current classifier are overwritten. The default HALCON file extension for the k-NN classifier is ’gnc’.
Parameters
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name of the classifier.
File extension: .gnc
. KNNHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
Handle of the k-NN classifier.
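Example
A minimal sketch (the file name is illustrative) of saving a trained classifier and reading it back:

* Save the trained classifier to disk
write_class_knn (KNNHandle, 'knn_classifier.gnc')
* ... later, e.g., in another application ...
read_class_knn ('knn_classifier.gnc', KNNHandleRead)
* Classify a feature vector of matching dimension
classify_class_knn (KNNHandleRead, Features, Result, Rating)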


Result
read_class_knn returns 2 (H_MSG_TRUE). An exception is raised if it was not possible to open the file
FileName or the file has the wrong format.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
classify_class_knn
See also
create_class_knn
References
Marius Muja, David G. Lowe: “Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration”;
International Conference on Computer Vision Theory and Applications (VISAPP 09); 2009.
Module
Foundation

select_feature_set_knn ( : : ClassTrainDataHandle, SelectionMethod, GenParamName, GenParamValue : KNNHandle, SelectedFeatureIndices, Score )

Selects an optimal subset from a set of features to solve a certain classification problem.
select_feature_set_knn selects an optimal subset from a set of features to solve a certain classification problem. The classification problem has to be specified with annotated training data in ClassTrainDataHandle and is classified by a k-nearest neighbors classifier. Details of the properties of this classifier can be found in create_class_knn.
The result of the operator is a trained classifier that is returned in KNNHandle. Additionally, the list of indices or
names of the selected features is returned in SelectedFeatureIndices. To use this classifier, calculate for
new input data all features mentioned in SelectedFeatureIndices and pass them to the classifier.
A possible application of this operator is the comparison of different parameter sets for certain feature extraction techniques. Another application is the search for a property that discriminates between different classes of parts or classes of errors.
To define the features that should be selected from ClassTrainDataHandle, the dimensions
of the feature vectors in ClassTrainDataHandle can be grouped into subfeatures by calling
set_feature_lengths_class_train_data. A subfeature can contain several subsequent elements of
a feature vector. The operator decides for each of these subfeatures whether it is better to use it for the classification or leave it out.
The indices of the selected subfeatures are returned in SelectedFeatureIndices. If names were set
in set_feature_lengths_class_train_data, these names are returned instead of the indices. If
set_feature_lengths_class_train_data was not called for ClassTrainDataHandle before,
each element of the feature vector is considered as a subfeature.
The selection method SelectionMethod is either a greedy search ’greedy’ (iteratively add the feature with
highest gain) or the dynamically oscillating search ’greedy_oscillating’ (add the feature with highest gain and test
then if any of the already added features can be left out without great loss). The method ’greedy’ is generally
preferable, since it is faster. The method ’greedy_oscillating’ should only be chosen if the subfeatures are low-dimensional or redundant.
The optimization criterion is the classification rate of a two-fold cross-validation of the training data. The best
achieved value is returned in Score.
The k-NN classifier can be parameterized using the following values in GenParamName and GenParamValue:


’num_neighbors’: The number of minimally evaluated nodes; increase this value for high-dimensional data.
Suggested values: ’1’, ’2’, ’5’, ’10’
Default: ’1’
’num_trees’: Number of search trees in the k-NN classifier
Suggested values: ’1’, ’4’, ’10’
Default: ’4’

Attention
This operator may take considerable time, depending on the size of the data set in the training file, and the number
of features.
Please note that this operator should not be called if only a small set of training data is available. Due to the risk of overfitting, select_feature_set_knn may deliver a classifier with a very high score, but the classifier may perform poorly when tested.
Parameters
. ClassTrainDataHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of the training data.
. SelectionMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Method to perform the selection.
Default: ’greedy’
List of values: SelectionMethod ∈ {’greedy’, ’greedy_oscillating’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Names of generic parameters to configure the selection process and the classifier.
Default: []
List of values: GenParamName ∈ {’num_neighbors’, ’num_trees’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer / string
Values of generic parameters to configure the selection process and the classifier.
Default: []
Suggested values: GenParamValue ∈ {1, 2, 3}
. KNNHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
A trained k-NN classifier using only the selected features.
. SelectedFeatureIndices (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
The selected feature set, contains indices or names.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
The achieved score using two-fold cross-validation.
Example

* Find out which of the two features distinguishes the two classes


NameFeature1 := 'Good Feature'
NameFeature2 := 'Bad Feature'
LengthFeature1 := 3
LengthFeature2 := 2
* Create training data
create_class_train_data (LengthFeature1+LengthFeature2,\
ClassTrainDataHandle)
* Define the features which are in the training data
set_feature_lengths_class_train_data (ClassTrainDataHandle, [LengthFeature1,\
LengthFeature2], [NameFeature1, NameFeature2])
* Add training data
* |Feat1| |Feat2|
add_sample_class_train_data (ClassTrainDataHandle, 'row', [1,1,1, 2,1 ], 0)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [2,2,2, 2,1 ], 1)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [1,1,1, 3,4 ], 0)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [2,2,2, 3,4 ], 1)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [0,0,1, 5,6 ], 0)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [2,3,2, 5,6 ], 1)
* Add more data


* ...
* Select the better feature with the k-NN classifier
select_feature_set_knn (ClassTrainDataHandle, 'greedy', [], [], KNNHandle,\
SelectedFeatureKNN, Score)
* Use the classifier
* ...

Result
If the parameters are valid, the operator select_feature_set_knn returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
create_class_train_data, add_sample_class_train_data,
set_feature_lengths_class_train_data
Possible Successors
classify_class_knn
Alternatives
select_feature_set_mlp, select_feature_set_svm, select_feature_set_gmm
See also
select_feature_set_trainf_knn, gray_features, region_features
Module
Foundation

serialize_class_knn ( : : KNNHandle : SerializedItemHandle )

Serialize a k-NN classifier.


serialize_class_knn serializes a k-NN classifier and its stored training samples (see fwrite_serialized_item for an introduction to the basic principle of serialization). The same data that
is written in a file by write_class_knn is converted to a serialized item. The k-NN classifier is defined by the
handle KNNHandle. The serialized k-NN classifier is returned by the handle SerializedItemHandle and
can be deserialized by deserialize_class_knn.
Parameters
. KNNHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
Handle of the k-NN classifier.
. SerializedItemHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
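Example
A minimal sketch of serializing a trained classifier and restoring it again:

* Convert the trained classifier into a serialized item
serialize_class_knn (KNNHandle, SerializedItemHandle)
* ... store or transmit the serialized item, e.g., with
* fwrite_serialized_item or send_serialized_item ...
* Restore the classifier from the serialized item
deserialize_class_knn (SerializedItemHandle, KNNHandleRestored)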
Result
If the parameters are valid, the operator serialize_class_knn returns the value 2 (H_MSG_TRUE). If nec-
essary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.


Possible Predecessors
train_class_knn, read_class_knn
Possible Successors
fwrite_serialized_item, send_serialized_item, deserialize_class_knn
See also
create_class_knn, read_class_knn, deserialize_class_knn
References
Marius Muja, David G. Lowe: “Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration”;
International Conference on Computer Vision Theory and Applications (VISAPP 09); 2009.
Module
Foundation

set_params_class_knn ( : : KNNHandle, GenParamName, GenParamValue : )

Set parameters for k-NN classification.


set_params_class_knn sets parameters for the classification of the k-nearest neighbors (k-NN) classifier
KNNHandle. It controls the behavior of classify_class_knn.
The value of ’k’ can be set via GenParamName and GenParamValue. Increasing ’k’ also increases the accuracy
of the resulting neighbors and increases the run time.
The results can either be the determined class of the feature vector or the indices of the nearest neighbors.
The result behavior can be selected with set_params_class_knn via the generic parameters ’method’ and
’max_num_classes’:
’classes_distance’: returns the nearest samples for each of maximally ’max_num_classes’ different classes, if they
have a representative in the nearest ’k’ neighbors. The results are classes sorted by their minimal distance.
There is no efficient way to determine in a k-NN-tree the nearest neighbor for exactly ’max_num_classes’
classes.
’classes_frequency’: counts the occurrences of the classes among the nearest ’k’ neighbors and returns the occurring classes sorted by their relative frequency, which is returned as well. Again, maximally ’max_num_classes’ values are returned.
’classes_weighted_frequencies’: counts the occurrences of the classes among the nearest ’k’ neighbors and returns the occurring classes sorted by their relative frequency weighted with the average distance, which is returned as well. Again, maximally ’max_num_classes’ values are returned.
’neighbors_distance’: returns the indices of the nearest ’k’ neighbors and the distances.
The default behavior is ’classes_distance’.
The option ’num_checks’ sets the maximum number of runs through the trees. The parameter has to be positive; the default value is 32. The higher this value, the more accurate the results will be; as a trade-off, the running time will also be higher. Setting this parameter to 0 triggers an exact search.
The option ’epsilon’ sets a stopping criterion if the value is increased from the default value 0.0. The higher the value, the less accurate the estimated neighbors will be, while the search might be faster.
Parameters
. KNNHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
Handle of the k-NN classifier.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
Names of the generic parameters that can be adjusted for the k-NN classifier.
Default: [’method’,’k’,’max_num_classes’]
List of values: GenParamName ∈ {’method’, ’num_checks’, ’epsilon’, ’k’, ’max_num_classes’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer / real / string
Values of the generic parameters that can be adjusted for the k-NN classifier.
Default: [’classes_distance’,5,1]
Suggested values: GenParamValue ∈ {’classes_distance’, ’classes_frequency’,
’classes_weighted_frequencies’, ’neighbors_distance’, 32, 0.0, 0.02, 0, 1, 2, 3, 4, 5, 6}
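Example
A minimal sketch (parameter values are illustrative) of configuring the classification behavior:

* Return the indices of the 5 nearest neighbors together with their distances
set_params_class_knn (KNNHandle, ['method','k'], ['neighbors_distance',5])
classify_class_knn (KNNHandle, Features, NeighborIndices, Distances)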


Result
If the parameters are valid, the operator set_params_class_knn returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• KNNHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
train_class_knn, read_class_knn
Possible Successors
classify_class_knn
See also
create_class_knn, read_class_knn, get_params_class_knn
References
Marius Muja, David G. Lowe: “Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration”;
International Conference on Computer Vision Theory and Applications (VISAPP 09); 2009.
Module
Foundation

train_class_knn ( : : KNNHandle, GenParamName, GenParamValue : )

Creates the search trees for a k-NN classifier.


train_class_knn creates the search trees for a k-NN classifier.
The number of search trees can be set via the parameters GenParamName and GenParamValue with ’num_trees’. The default number of search trees is 4. A higher number of trees improves the accuracy of the search, but increases the run time.
More samples can be added after training using the operator add_sample_class_knn. The added data affects the classification only if train_class_knn is called again.
Automatic feature normalization can be activated by setting ’normalization’ in GenParamName and ’true’ in GenParamValue. The feature vectors are normalized by normalizing each dimension separately. For each dimension, the mean and standard deviation are calculated over the training samples. Every feature vector is normalized by subtracting the mean and dividing by the standard deviation of the individual dimension. This results in a normalization where each dimension has zero mean and unit variance. If the standard deviation happens to be zero, only the mean is subtracted. Note, however, that a feature dimension with zero standard deviation does not change the classification result and should be removed. Automatic feature normalization changes the stored training data, but the original data can be restored at any time by calling train_class_knn with ’normalization’ set to ’false’. If normalization is used, the operator classify_class_knn interprets the input data as unnormalized and performs normalization internally as defined in the last call to train_class_knn.
Parameters
. KNNHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
Handle of the k-NN classifier.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
Names of the generic parameters that can be adjusted for the k-NN classifier creation.
Default: []
List of values: GenParamName ∈ {’num_trees’, ’normalization’}


. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer / string / real


Values of the generic parameters that can be adjusted for the k-NN classifier creation.
Default: []
Suggested values: GenParamValue ∈ {4, ’false’, ’true’}
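Example
A minimal sketch (feature values are illustrative) of training with more search trees and feature normalization:

* Create and fill the classifier
create_class_knn (3, KNNHandle)
add_sample_class_knn (KNNHandle, [10,200,3000, 12,210,3100], 0)
add_sample_class_knn (KNNHandle, [90,800,9000], 1)
* Build 10 search trees and normalize the features
train_class_knn (KNNHandle, ['num_trees','normalization'], [10,'true'])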
Result
If the parameters are valid, the operator train_class_knn returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• KNNHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
add_sample_class_knn, read_class_knn
Alternatives
select_feature_set_knn
See also
create_class_knn, read_class_knn
References
Marius Muja, David G. Lowe: “Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration”;
International Conference on Computer Vision Theory and Applications (VISAPP 09); 2009.
Module
Foundation

write_class_knn ( : : KNNHandle, FileName : )

Save the k-NN classifier in a file.


write_class_knn writes the k-NN classifier KNNHandle to the file given by FileName. The classifier
can be read again with read_class_knn. Since the samples are an intrinsic component of a k-NN-classifier,
the operator write_class_knn saves them within the class file. In contrast to other classifiers like SVM,
there is no operator for saving the samples separately. The samples can be retrieved from a k-NN-classifier using
get_sample_class_knn. The default HALCON file extension for the k-NN classifier is ’gnc’.
Parameters
. KNNHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
Handle of the k-NN classifier.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
Name of the file in which the classifier will be written.
File extension: .gnc
Result
write_class_knn returns 2 (H_MSG_TRUE). An exception is raised if it was not possible to open file
FileName.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).


• Processed without parallelization.


Possible Predecessors
train_class_knn, read_class_knn
See also
create_class_knn, read_class_knn
References
Marius Muja, David G. Lowe: “Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration”;
International Conference on Computer Vision Theory and Applications (VISAPP 09); 2009.
Module
Foundation

7.3 Look-Up Table

clear_class_lut ( : : ClassLUTHandle : )

Clear a look-up table classifier.


clear_class_lut clears the look-up table (LUT) given by ClassLUTHandle and frees all memory re-
quired for the LUT. After calling clear_class_lut, the LUT classifier can no longer be used. The handle
ClassLUTHandle becomes invalid.
Parameters
. ClassLUTHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_lut(-array) ; handle
Handle of the LUT classifier.
Result
If ClassLUTHandle is valid, the operator clear_class_lut returns the value 2 (H_MSG_TRUE). If neces-
sary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
classify_image_class_lut
See also
create_class_lut_mlp, create_class_lut_svm, create_class_lut_gmm
Module
Foundation

create_class_lut_gmm ( : : GMMHandle, GenParamName, GenParamValue : ClassLUTHandle )

Create a look-up table using a Gaussian mixture model to classify byte images.
create_class_lut_gmm generates a look-up table (LUT) ClassLUTHandle using the data of a trained Gaussian mixture model (GMM) GMMHandle to classify multi-channel byte images. By using this GMM-based
LUT classifier the operator classify_image_class_gmm of the subsequent classification can be replaced by
the operator classify_image_class_lut. The classification gets a major speed-up, because the estimation
of the class in every image point is no longer necessary since every possible response of the GMM is stored in the
LUT. For the generation of the LUT, the parameters NumDim, Preprocessing, and NumComponents defined
in the earlier called operator create_class_gmm are important. In NumDim, the number of image channels
the images must have to be classified is defined. By using the Preprocessing (see create_class_gmm)

the number of image channels can be transformed to NumComponents. NumComponents defines the length
of the feature vector, which the classifier classify_class_gmm handles internally. Because of perfor-
mance and disk space, the LUT is restricted to be maximal 3-dimensional. Since it replaces the operator
classify_class_gmm, NumComponents ≤ 3 must hold. If there is no preprocessing that reduces the num-
ber of image channels (NumDim = NumComponents), all possible pixel values, which can occur in a byte
image, are classified with classify_class_gmm. The returned classes are stored in the LUT. If there is
a preprocessing that reduces the number of image channels (NumDim > NumComponents), the preprocess-
ing parameters of the GMM are stored in a separate structure of the LUT. To create the LUT, all transformed
pixel values are classified with classify_class_gmm. The returned classes are stored in the LUT. Because
of the discretization of the LUT, the accuracy of the LUT classifier could become lower than the accuracy of
classify_image_class_gmm. With ’bit_depth’ and ’class_selection’ the accuracy of the classification, the
required storage, and the runtime needed to create the LUT can be controlled.
The following parameters of the GMM-based LUT classifier can be set with GenParamName and
GenParamValue:

’bit_depth’: Number of bits used from the pixels. It controls the storage requirement of the LUT classifier and is bounded by the bit depth of the image (’bit_depth’ ≤ 8). If the bit depth of the LUT is smaller (’bit_depth’ < 8), the classes of multiple pixel combinations will be mapped to the same LUT entry, which can result in a lower accuracy for the classification. One of these clusters contains 2^(NumComponents·(8−bit_depth)) pixel combinations, where NumComponents denotes the dimension of the LUT, which is specified in create_class_gmm. For example, for ’bit_depth’ = 7 and NumComponents = 3, the classes of 8 pixel combinations are mapped to the same LUT entry. The LUT requires at most 2^(NumComponents·bit_depth+2) bytes of storage. For example, for NumComponents = 3, ’bit_depth’ = 8, and NumClasses < 16 (specified in create_class_gmm), the LUT requires 8 MB of storage with internal storage optimization. If NumClasses = 1, the LUT requires only 2 MB of storage by using the full bit depth of the LUT. The runtime for the classification in classify_image_class_lut becomes minimal if the LUT fits into the cache.
Suggested values: 6, 7, 8
Default: 8
Restriction: ’bit_depth’ ≥ 1, ’bit_depth’ ≤ 8.
’class_selection’: Method for the class selection for the LUT. It can be modified to control the accuracy and the runtime needed to create the LUT classifier. The value in ’class_selection’ is ignored if the bit depth of the LUT is maximal, i.e., if ’bit_depth’ = 8 holds. If the bit depth of the LUT is smaller (’bit_depth’ < 8), the classes of multiple pixel combinations will be mapped to the same LUT entry. One of these clusters contains 2^(NumComponents·(8−bit_depth)) pixel combinations, where NumComponents denotes the dimension of the LUT, which is specified in create_class_gmm. By choosing ’class_selection’ = ’best’, the class that appears most often in the cluster is stored in the LUT. For ’class_selection’ = ’fast’, only one pixel of the cluster, i.e., the pixel with the smallest value (component-wise), is classified. The returned class is stored in the LUT. In this case, the accuracy of the subsequent classification could become lower. On the other hand, the runtime needed to create the LUT can be reduced, which is proportional to the maximal needed storage of the LUT, i.e., 2^(NumComponents·bit_depth+2) bytes.
List of values: ’fast’, ’best’
Default: ’fast’
’rejection_threshold’: Threshold for the rejection of uncertain classified points of the GMM. The param-
eter represents a threshold on the K-sigma probability measure returned by the classification (see
classify_class_gmm and evaluate_class_gmm). All pixels having a probability below ’rejec-
tion_threshold’ are not assigned to any class.
Default: 0.0001
Restriction: ’rejection_threshold’ ≥ 0, ’rejection_threshold’ ≤ 1.

Parameters

. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle


GMM handle.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters that can be adjusted for the LUT classifier creation.
Default: []
Suggested values: GenParamName ∈ {’bit_depth’, ’class_selection’, ’rejection_threshold’}


. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; string / integer / real


Values of the generic parameters that can be adjusted for the LUT classifier creation.
Default: []
Suggested values: GenParamValue ∈ {8, 7, 6, ’fast’, ’best’}
. ClassLUTHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_lut ; handle
Handle of the LUT classifier.
Result
If the parameters are valid, the operator create_class_lut_gmm returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
train_class_gmm, read_class_gmm
Possible Successors
classify_image_class_lut
Alternatives
create_class_lut_knn, create_class_lut_mlp, create_class_lut_svm
See also
classify_image_class_lut, clear_class_lut
Module
Foundation

create_class_lut_knn ( : : KNNHandle, GenParamName, GenParamValue : ClassLUTHandle )

Create a look-up table using a k-nearest neighbors classifier (k-NN) to classify byte images.
create_class_lut_knn generates a look-up table (LUT) ClassLUTHandle using the data of a trained k-nearest neighbors classifier (k-NN) KNNHandle to classify multi-channel byte images. By using this k-NN-based LUT classifier, the operator classify_image_class_knn of the subsequent classification can be replaced by the operator classify_image_class_lut. The classification is sped up considerably, because the estimation of the class in every image point is no longer necessary since every possible response of the k-NN is stored in the LUT. For the generation of the LUT, the parameter NumDim of the previously called operator create_class_knn is important. The number of image channels the images must have to be classified is defined in NumDim.
To create the LUT, all pixel values are classified with classify_class_knn. The returned classes are stored
in the LUT. Because of the discretization of the LUT, the accuracy of the LUT classifier could become lower than
the accuracy of classify_image_class_knn.
With ’bit_depth’ the accuracy of the classification, the required storage, and the runtime needed to create the LUT
can be controlled.
The following parameters of the k-NN-based LUT classifier can be set with GenParamName and
GenParamValue:

’bit_depth’: Number of bits used from the pixels. It controls the storage requirement of the LUT classifier and is bounded by the bit depth of the image (’bit_depth’ ≤ 8). If the bit depth of the LUT is smaller (’bit_depth’ < 8), the classes of multiple pixel combinations will be mapped to the same LUT entry, which can result in a lower accuracy for the classification. One of these clusters contains 2^(NumDim·(8−bit_depth)) pixel combinations, where NumDim denotes the dimension of the LUT, which is specified in create_class_knn. For example, for ’bit_depth’ = 7 and NumDim = 3, the classes of 8 pixel combinations are mapped to the same LUT entry. The LUT requires at most 2^(NumDim·bit_depth+2) bytes of storage. For example, for NumDim = 3, ’bit_depth’ = 8, and a number of classes smaller than 16, the LUT requires 8 MB of storage with internal storage optimization. The runtime for the classification in classify_image_class_lut becomes minimal if the LUT fits into the cache.
Suggested values: 6, 7, 8
Default: 8
Restriction: ’bit_depth’ ≥ 1, ’bit_depth’ ≤ 8.
’rejection_threshold’: Threshold for the rejection of uncertain classified points of the k-NN. The parameter rep-
resents a threshold on the distance returned by the classification (see classify_class_knn). All pixels
having a distance over ’rejection_threshold’ are not assigned to any class.
Default: 5
Restriction: ’rejection_threshold’ ≥ 0.

Parameters
. KNNHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
Handle of the k-NN classifier.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters that can be adjusted for the LUT classifier creation.
Default: []
Suggested values: GenParamName ∈ {’bit_depth’, ’rejection_threshold’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; string / integer / real
Values of the generic parameters that can be adjusted for the LUT classifier creation.
Default: []
Suggested values: GenParamValue ∈ {8, 7, 6, 0.5, 5, 10, 50}
. ClassLUTHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_lut ; handle
Handle of the LUT classifier.
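Example
A minimal sketch (assuming a k-NN classifier trained on 3-channel pixel features, e.g., RGB values, in KNNHandle) of replacing the pixel-wise k-NN classification by a LUT:

* Build the LUT from the trained k-NN classifier
create_class_lut_knn (KNNHandle, ['bit_depth'], [8], ClassLUTHandle)
* Classify a multi-channel byte image via the LUT
classify_image_class_lut (Image, ClassRegions, ClassLUTHandle)
* Free the LUT when it is no longer needed
clear_class_lut (ClassLUTHandle)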
Result
If the parameters are valid, the operator create_class_lut_knn returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
train_class_knn, read_class_knn
Possible Successors
classify_image_class_lut
Alternatives
create_class_lut_svm, create_class_lut_gmm, create_class_lut_mlp
See also
classify_image_class_lut, clear_class_lut
Module
Foundation

create_class_lut_mlp ( : : MLPHandle, GenParamName, GenParamValue : ClassLUTHandle )

Create a look-up table using a multi-layer perceptron to classify byte images.


create_class_lut_mlp generates a look-up table (LUT) ClassLUTHandle using the data of a trained
multi-layer perceptron (MLP) MLPHandle to classify multi-channel byte images. By using this MLP-based LUT

classifier the operator classify_image_class_mlp of the subsequent classification can be replaced by the
operator classify_image_class_lut. The classification gets a major speed-up, because the estimation of
the class in every image point is no longer necessary since every possible response of the MLP is stored in the LUT.
For the generation of the LUT, the parameters NumInput, Preprocessing, and NumComponents defined
in the earlier called operator create_class_mlp are important. In NumInput, the number of image channels
the images must have to be classified is defined. By using the Preprocessing (see create_class_mlp)
the number of image channels can be transformed to NumComponents. NumComponents defines the length
of the feature vector, which the classifier classify_class_mlp handles internally. Because of perfor-
mance and disk space, the LUT is restricted to be maximal 3-dimensional. Since it replaces the operator
classify_class_mlp, NumComponents ≤ 3 must hold. If there is no preprocessing that reduces the
number of image channels (NumInput = NumComponents), all possible pixel values, which can occur in a
byte image, are classified with classify_class_mlp. The returned classes are stored in the LUT. If there
is a preprocessing that reduces the number of image channels (NumInput > NumComponents), the prepro-
cessing parameters of the MLP are stored in a separate structure of the LUT. To create the LUT, all transformed
pixel values are classified with classify_class_mlp. The returned classes are stored in the LUT. Because
of the discretization of the LUT, the accuracy of the LUT classifier could become lower than the accuracy of
classify_image_class_mlp. With ’bit_depth’ and ’class_selection’ the accuracy of the classification, the
required storage, and the runtime needed to create the LUT can be controlled.
The following parameters of the MLP-based LUT classifier can be set with GenParamName and
GenParamValue:

’bit_depth’: Number of bits used from the pixels. It controls the storage requirement of the LUT classifier and is bounded by the bit depth of the image (’bit_depth’ ≤ 8). If the bit depth of the LUT is smaller (’bit_depth’ < 8), the classes of multiple pixel combinations will be mapped to the same LUT entry, which can result in a lower accuracy for the classification. One of these clusters contains 2^(NumComponents·(8−bit_depth)) pixel combinations, where NumComponents denotes the dimension of the LUT, which is specified in create_class_mlp. For example, for ’bit_depth’ = 7 and NumComponents = 3, the classes of 8 pixel combinations are mapped to the same LUT entry. The LUT requires at most 2^(NumComponents·bit_depth+2) bytes of storage. For example, for NumComponents = 3, ’bit_depth’ = 8, and NumOutput < 16 (specified in create_class_mlp), the LUT requires 8 MB of storage with internal storage optimization. If NumOutput = 1, the LUT requires only 2 MB of storage by using the full bit depth of the LUT. The runtime for the classification in classify_image_class_lut becomes minimal if the LUT fits into the cache.
Suggested values: 6, 7, 8
Default: 8
Restriction: ’bit_depth’ ≥ 1, ’bit_depth’ ≤ 8.
’class_selection’: Method for the class selection for the LUT. It can be modified to control the accuracy and the runtime needed to create the LUT classifier. The value in ’class_selection’ is ignored if the bit depth of the LUT is maximal, i.e., if ’bit_depth’ = 8 holds. If the bit depth of the LUT is smaller (’bit_depth’ < 8), the classes of multiple pixel combinations will be mapped to the same LUT entry. One of these clusters contains 2^(NumComponents·(8−bit_depth)) pixel combinations, where NumComponents denotes the dimension of the LUT, which is specified in create_class_mlp. By choosing ’class_selection’ = ’best’, the class that appears most often in the cluster is stored in the LUT. For ’class_selection’ = ’fast’, only one pixel of the cluster, i.e., the pixel with the smallest value (component-wise), is classified. The returned class is stored in the LUT. In this case, the accuracy of the subsequent classification could become lower. On the other hand, the runtime needed to create the LUT can be reduced, which is proportional to the maximal needed storage of the LUT, i.e., 2^(NumComponents·bit_depth+2) bytes.
List of values: ’fast’, ’best’
Default: ’fast’
’rejection_threshold’: Threshold for the rejection of uncertain classified points of the MLP. The parameter rep-
resents a threshold on the probability measure returned by the classification (see classify_class_mlp
and evaluate_class_mlp). All pixels having a probability below ’rejection_threshold’ are not assigned
to any class.
Default: 0.5
Restriction: ’rejection_threshold’ ≥ 0, ’rejection_threshold’ ≤ 1.


Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters that can be adjusted for the LUT classifier creation.
Default: []
Suggested values: GenParamName ∈ {’bit_depth’, ’class_selection’, ’rejection_threshold’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; string / integer / real
Values of the generic parameters that can be adjusted for the LUT classifier creation.
Default: []
Suggested values: GenParamValue ∈ {8, 7, 6, ’fast’, ’best’}
. ClassLUTHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_lut ; handle
Handle of the LUT classifier.
Result
If the parameters are valid, the operator create_class_lut_mlp returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
train_class_mlp, read_class_mlp
Possible Successors
classify_image_class_lut
Alternatives
create_class_lut_gmm, create_class_lut_knn, create_class_lut_svm
See also
classify_image_class_lut, clear_class_lut
Module
Foundation

create_class_lut_svm ( : : SVMHandle, GenParamName, GenParamValue : ClassLUTHandle )

Create a look-up table using a Support-Vector-Machine to classify byte images.


create_class_lut_svm generates a look-up table (LUT) ClassLUTHandle using the data of a trained
Support-Vector-Machine (SVM) SVMHandle to classify multi-channel byte images. By using this SVM-
based LUT classifier the operator classify_image_class_svm of the subsequent classification can be
replaced by the operator classify_image_class_lut. The classification gets a major speed-up, be-
cause the estimation of the class in every image point is no longer necessary since every possible re-
sponse of the SVM is stored in the LUT. For the generation of the LUT, the parameters NumFeatures,
Preprocessing, and NumComponents defined in the earlier called operator create_class_svm are
important. In NumFeatures, the number of image channels the images must have to be classified is de-
fined. By using the Preprocessing (see create_class_svm) the number of image channels can be trans-
formed to NumComponents. NumComponents defines the length of the feature vector, which the classifier
classify_class_svm handles internally. Because of performance and disk space, the LUT is restricted
to be maximal 3-dimensional. Since it replaces the operator classify_class_svm, NumComponents
≤ 3 must hold. If there is no preprocessing that reduces the number of image channels (NumFeatures
= NumComponents), all possible pixel values, which can occur in a byte image, are classified with

classify_class_svm. The returned classes are stored in the LUT. If there is a preprocessing that reduces
the number of image channels (NumFeatures > NumComponents), the preprocessing parameters of the SVM
are stored in a separate structure of the LUT. To create the LUT, all transformed pixel values are classified with
classify_class_svm. The returned classes are stored in the LUT. Because of the discretization of the LUT,
the accuracy of the LUT classifier could become lower than the accuracy of classify_image_class_svm.
With ’bit_depth’ and ’class_selection’ the accuracy of the classification, the required storage, and the runtime
needed to create the LUT can be controlled.
The following parameters of the SVM-based LUT classifier can be set with GenParamName and
GenParamValue:

’bit_depth’: Number of bits used from the pixels. It controls the storage requirement of the LUT classifier and is
bounded by the bit depth of the image (’bit_depth’ ≤ 8). If the bit depth of the LUT is smaller (’bit_depth’
< 8), the classes of multiple pixel combinations will be mapped to the same LUT entry, which can result
in a lower accuracy for the classification. One of these clusters contains 2^(NumComponents·(8−bit_depth))
pixel combinations, where NumComponents denotes the dimension of the LUT, which is specified in
create_class_svm. For example, for ’bit_depth’ = 7, NumComponents = 3, the classes of 8 pixel
combinations are mapped in the same LUT entry. The LUT requires at most 2^(NumComponents·bit_depth+2)
bytes of storage. For example, for NumComponents = 3, ’bit_depth’ = 8 and NumClasses < 16 (spec-
ified in create_class_svm), the LUT requires 8 MB of storage with internal storage optimization. If
NumClasses = 1, the LUT requires only 2 MB of storage by using the full bit depth of the LUT. The
runtime for the classification in classify_image_class_lut becomes minimal if the LUT fits into
the cache.
Suggested values: 6,7,8
Default: 8
Restriction: ’bit_depth’ ≥ 1, ’bit_depth’ ≤ 8.
’class_selection’: Method for the class selection for the LUT. Can be modified to control the accuracy and the
runtime needed to create the LUT classifier. The value in ’class_selection’ is ignored if the bit depth of the
LUT is maximal, thus ’bit_depth’ = 8 holds. If the bit depth of the LUT is smaller (’bit_depth’ < 8), the
classes of multiple pixel combinations will be mapped to the same LUT entry. One of these clusters contains
2^(NumComponents·(8−bit_depth)) pixel combinations, where NumComponents denotes the dimension of the
LUT, which is specified in create_class_svm. By choosing ’class_selection’ = ’best’, the class that
appears most often in the cluster is stored in the LUT. For ’class_selection’ = ’fast’, only one pixel of the
cluster, i.e., the pixel with the smallest value (component-wise), is classified. The returned class is stored in
the LUT. In this case, the accuracy of the subsequent classification could become lower. On the other hand,
the runtime needed to create the LUT can be reduced, which is proportional to the maximal needed storage
of the LUT, which is defined with 2^(NumComponents·bit_depth+2).
List of values: ’fast’, ’best’
Default: ’fast’

Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters that can be adjusted for the LUT classifier creation.
Default: []
Suggested values: GenParamName ∈ {’bit_depth’, ’class_selection’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value-array ; string / integer
Values of the generic parameters that can be adjusted for the LUT classifier creation.
Default: []
Suggested values: GenParamValue ∈ {8, 7, 6, ’fast’, ’best’}
. ClassLUTHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_lut ; handle
Handle of the LUT classifier.
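Example

A minimal sketch of a typical workflow, assuming a three-channel byte image and example training regions; the concrete SVM parameters as well as the values ’bit_depth’ = 7 and ’class_selection’ = ’best’ are placeholders only.

* Create and train an SVM on 3-channel byte images (NumComponents = 3)
create_class_svm (3, 'rbf', 0.02, 0.05, 2, 'one-versus-all', 'normalization', 3, SVMHandle)
add_samples_image_class_svm (Image, ClassRegions, SVMHandle)
train_class_svm (SVMHandle, 0.001, 'default')
* Build the LUT classifier; a reduced bit depth saves memory at the cost of accuracy
create_class_lut_svm (SVMHandle, ['bit_depth','class_selection'], [7,'best'], ClassLUTHandle)
* Classify images with the (much faster) LUT instead of classify_image_class_svm
classify_image_class_lut (Image, ClassRegionsLUT, ClassLUTHandle)
clear_class_lut (ClassLUTHandle)
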
Result
If the parameters are valid, the operator create_class_lut_svm returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information


• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
train_class_svm, read_class_svm
Possible Successors
classify_image_class_lut
Alternatives
create_class_lut_gmm, create_class_lut_knn, create_class_lut_mlp
See also
classify_image_class_lut, clear_class_lut
Module
Foundation

7.4 Misc

add_sample_class_train_data ( : : ClassTrainDataHandle, Order, Features, ClassID : )

Add a training sample to training data.


add_sample_class_train_data adds a training sample to the training data given by
ClassTrainDataHandle. The training sample is given by Features and ClassID. Features is
the feature vector of the sample, and consequently must be a real vector of length NumDim, as specified in
create_class_train_data. ClassID is the class of the sample. More than one training sample can be
added at once. In this case the parameter Order defines in which order the elements of the feature vectors are
passed in Features. If it is set to ’row’, the first training sample comes first, the second comes second, and so
on. If it is set to ’column’, the first dimension of all feature vectors comes first, and then the second dimension of
all feature vectors, and so on. The third possible mode for Order is ’feature_column’. In this mode, the first
feature group (defined beforehand with set_feature_lengths_class_train_data) is passed completely and
row-wise, followed by the second feature group, and so on.
Parameters
. ClassTrainDataHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of the training data.
. Order (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
The order of the feature vector.
Default: ’row’
List of values: Order ∈ {’row’, ’column’, ’feature_column’}
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real
Feature vector of the training sample.
. ClassID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Class of the training sample.
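Example

A short sketch illustrating the effect of Order; the feature values and classes are arbitrary placeholders.

* Training data for 3-dimensional feature vectors
create_class_train_data (3, ClassTrainDataHandle)
* 'row': sample 1 = [1,2,3] (class 0), sample 2 = [4,5,6] (class 1)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [1,2,3, 4,5,6], [0,1])
* 'column': first all first components, then all second components, and so on;
* the same two samples would therefore be passed as [1,4, 2,5, 3,6]
add_sample_class_train_data (ClassTrainDataHandle, 'column', [1,4, 2,5, 3,6], [0,1])
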
Result
If the parameters are valid, the operator add_sample_class_train_data returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.


This operator modifies the state of the following input parameter:


• ClassTrainDataHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_class_train_data
Possible Successors
add_class_train_data_svm, add_class_train_data_knn, add_class_train_data_gmm,
add_class_train_data_mlp
See also
create_class_train_data
Module
Foundation

clear_class_train_data ( : : ClassTrainDataHandle : )

Clears training data for classifiers.


clear_class_train_data clears the training data given in ClassTrainDataHandle. After calling
clear_class_train_data, ClassTrainDataHandle becomes invalid.
Parameters
. ClassTrainDataHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of training data for a classifier.
Result
If the parameters are valid, the operator clear_class_train_data returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• ClassTrainDataHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_class_train_data
See also
create_class_train_data
Module
Foundation

create_class_train_data ( : : NumDim : ClassTrainDataHandle )

Create a handle for training data for classifiers.


create_class_train_data creates a handle for training data for classifiers. The handle is returned in
ClassTrainDataHandle. The dimension of the feature vectors is specified with NumDim. Only feature
vectors of this length can be added to the handle.


Parameters
. NumDim (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Number of dimensions of the feature vector.
Default: 10
. ClassTrainDataHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of the training data.
Example

* Find out which of the two features distinguishes two Classes


NameFeature1 := 'Good Feature'
NameFeature2 := 'Bad Feature'
LengthFeature1 := 3
LengthFeature2 := 2
* Create training data
create_class_train_data (LengthFeature1+LengthFeature2,\
ClassTrainDataHandle)
* Define the features which are in the training data
set_feature_lengths_class_train_data (ClassTrainDataHandle, [LengthFeature1,\
LengthFeature2], [NameFeature1, NameFeature2])
* Add training data
* |Feat1| |Feat2|
add_sample_class_train_data (ClassTrainDataHandle, 'row', [1,1,1, 2,1 ], 0)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [2,2,2, 2,1 ], 1)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [1,1,1, 3,4 ], 0)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [2,2,2, 3,4 ], 1)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [0,0,1, 5,6 ], 0)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [2,3,2, 5,6 ], 1)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [0,0,1, 5,6 ], 0)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [2,3,2, 5,6 ], 1)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [0,0,1, 5,6 ], 0)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [2,3,2, 5,6 ], 1)
* Add more data
* ...
* Select the better feature with the classifier of your choice
select_feature_set_knn (ClassTrainDataHandle, 'greedy', [], [], KNNHandle,\
SelectedFeature, Score)
select_feature_set_svm (ClassTrainDataHandle, 'greedy', [], [], SVMHandle,\
SelectedFeature, Score)
select_feature_set_mlp (ClassTrainDataHandle, 'greedy', [], [], MLPHandle,\
SelectedFeature, Score)
select_feature_set_gmm (ClassTrainDataHandle, 'greedy', [], [], GMMHandle,\
SelectedFeature, Score)
* Use the classifier
* ...

Result
If the parameters are valid, the operator create_class_train_data returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.


Possible Successors
add_sample_class_knn, train_class_knn
Alternatives
create_class_svm, create_class_mlp
See also
select_feature_set_knn, read_class_knn
Module
Foundation

deserialize_class_train_data (
: : SerializedItemHandle : ClassTrainDataHandle )

Deserialize serialized training data for classifiers.


deserialize_class_train_data deserializes training data for classifiers that was serialized by
serialize_class_train_data (see fwrite_serialized_item for an introduction of the basic prin-
ciple of serialization). The serialized training data is defined by the handle SerializedItemHandle.
The deserialized values are stored in an automatically created training data block with the handle
ClassTrainDataHandle.
Parameters
. SerializedItemHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
. ClassTrainDataHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of the training data.
Result
If the parameters are valid, the operator deserialize_class_train_data returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
serialize_class_train_data
Possible Successors
fwrite_serialized_item
See also
create_class_train_data
Module
Foundation

get_sample_class_train_data ( : : ClassTrainDataHandle,
IndexSample : Features, ClassID )

Return a training sample from training data.


get_sample_class_train_data reads a training sample from the training data given by
ClassTrainDataHandle that was added, e.g., with add_sample_class_train_data. The in-
dex of the sample is specified with IndexSample. The index is counted from 0. That means that
IndexSample must be a number between 0 and NumSamples − 1, where NumSamples can be deter-
mined with get_sample_num_class_train_data. The training sample is returned in Features
and ClassID. Features is a feature vector of length NumDim (see create_class_train_data) and
ClassID is the class of the feature vector.
Parameters
. ClassTrainDataHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of training data for a classifier.
. IndexSample (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of stored training sample.
. Features (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector of the training sample.
. ClassID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Class of the training sample.
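Example

A minimal sketch for inspecting all stored samples; ClassTrainDataHandle is assumed to contain samples added beforehand with add_sample_class_train_data.

get_sample_num_class_train_data (ClassTrainDataHandle, NumSamples)
for I := 0 to NumSamples - 1 by 1
    get_sample_class_train_data (ClassTrainDataHandle, I, Features, ClassID)
    * ... inspect Features and ClassID ...
endfor
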
Result
If the parameters are valid, the operator get_sample_class_train_data returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
add_sample_class_train_data
See also
create_class_train_data
Module
Foundation

get_sample_num_class_train_data (
: : ClassTrainDataHandle : NumSamples )

Return the number of training samples stored in the training data.


get_sample_num_class_train_data returns in NumSamples the number of training samples which are
stored in the training data specified by ClassTrainDataHandle.
Parameters
. ClassTrainDataHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of training data.
. NumSamples (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of stored training samples.
Result
If ClassTrainDataHandle is valid, the operator get_sample_num_class_train_data returns the
value 2 (H_MSG_TRUE). If necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
add_sample_class_train_data
Possible Successors
get_sample_class_train_data


See also
create_class_train_data
Module
Foundation

read_class_train_data ( : : FileName : ClassTrainDataHandle )

Read the training data for classifiers from a file.


read_class_train_data reads the saved training data for classifiers from the file FileName (see
write_class_train_data). The default HALCON file extension for training data for a classifier is ’ctd’.
Parameters
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name of the training data.
File extension: .ctd
. ClassTrainDataHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of the training data.
Result
read_class_train_data returns 2 (H_MSG_TRUE). An exception is raised if it was not possible to open
the file FileName or the file has the wrong format.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
See also
create_class_train_data, write_class_train_data
Module
Foundation

select_sub_feature_class_train_data ( : : ClassTrainDataHandle,
SubFeatureIndices : SelectedClassTrainDataHandle )

Select certain features from training data to create training data containing fewer features.

select_sub_feature_class_train_data selects certain features from the training data
in ClassTrainDataHandle and returns the subset in SelectedClassTrainDataHandle.
The features that should be selected can be chosen by SubFeatureIndices. If
set_feature_lengths_class_train_data was not called before, the indices refer to the columns.
If set_feature_lengths_class_train_data was called before, the grouping defined there is
relevant for the meaning of the indices. In that case, the entry n in the list selects the n-th feature group. If
set_feature_lengths_class_train_data was called with names for the feature groups, those names
can be used instead of the indices.
Parameters
. ClassTrainDataHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of the training data.
. SubFeatureIndices (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer / string
Indices or names to select the subfeatures or columns.
. SelectedClassTrainDataHandle (output_control) . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of the reduced training data.


Example

* Find out which of the two features distinguishes two Classes


NameFeature1 := 'Good Feature'
NameFeature2 := 'Bad Feature'
LengthFeature1 := 3
LengthFeature2 := 2
* Create training data
create_class_train_data (LengthFeature1+LengthFeature2,\
ClassTrainDataHandle)
* Define the features which are in the training data
set_feature_lengths_class_train_data (ClassTrainDataHandle, [LengthFeature1,\
LengthFeature2], [NameFeature1, NameFeature2])
* Add training data
* |Feat1| |Feat2|
add_sample_class_train_data (ClassTrainDataHandle, 'row', [1,1,1, 2,1 ], 0)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [2,2,2, 2,1 ], 1)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [1,1,1, 3,4 ], 0)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [2,2,2, 3,4 ], 1)
* Add more data
* ...
* Select one of the features
select_sub_feature_class_train_data (ClassTrainDataHandle, NameFeature1, \
SelectedClassTrainDataHandle)
* Add training data to a classifier
create_class_knn (LengthFeature1, KNNHandle)
add_class_train_data_knn (KNNHandle, SelectedClassTrainDataHandle)
train_class_knn (KNNHandle, [], [])
* Use the classifier
* ...

Result
If the parameters are valid, the operator select_sub_feature_class_train_data returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
create_class_train_data, add_sample_class_train_data,
set_feature_lengths_class_train_data
Possible Successors
add_class_train_data_gmm, add_class_train_data_mlp, add_class_train_data_svm,
add_class_train_data_knn
Module
Foundation

serialize_class_train_data (
: : ClassTrainDataHandle : SerializedItemHandle )

Serialize training data for classifiers.


serialize_class_train_data serializes training data for classifiers and its stored training samples (see
fwrite_serialized_item for an introduction of the basic principle of serialization). The same data that
is written in a file by write_class_train_data is converted to a serialized item. The training data
is defined by the handle ClassTrainDataHandle. The serialized training data is returned by the handle
SerializedItemHandle and can be deserialized by deserialize_class_train_data.
Parameters

. ClassTrainDataHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle


Handle of the training data.
. SerializedItemHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
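Example

A minimal sketch, assuming the file name 'train_data.hsd' is a placeholder chosen by the application; the file is written and read with the generic serialization operators.

* Serialize the training data and write it to a file
serialize_class_train_data (ClassTrainDataHandle, SerializedItemHandle)
open_file ('train_data.hsd', 'output_binary', FileHandle)
fwrite_serialized_item (FileHandle, SerializedItemHandle)
close_file (FileHandle)
* ... later, e.g. in another process: read the item back and deserialize it
open_file ('train_data.hsd', 'input_binary', FileHandle)
fread_serialized_item (FileHandle, SerializedItemHandle2)
close_file (FileHandle)
deserialize_class_train_data (SerializedItemHandle2, ClassTrainDataHandle2)
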
Result
If the parameters are valid, the operator serialize_class_train_data returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Successors
deserialize_class_train_data
See also
create_class_train_data, read_class_train_data
Module
Foundation

set_feature_lengths_class_train_data ( : : ClassTrainDataHandle,
SubFeatureLength, Names : )

Define subfeatures in training data.


set_feature_lengths_class_train_data defines subfeatures in the training data in
ClassTrainDataHandle. The subfeatures are defined in SubFeatureLength by a set of lengths
that groups the previously added columns consecutively into subfeatures. It is not possible to group columns that
are not consecutive. The sum over all entries in SubFeatureLength must be equal to the number of dimensions
set in create_class_train_data with the parameter NumDim. Optionally, names for all subsets can be
defined in Names.
An exemplary situation in which this operator is helpful is described here: Two different data sources are available.
Both data sources provide a vector of a certain length. The first data source provides data of length n and the second
of length m. In order to automatically decide which of the data sources is more valuable for a certain classification
problem, training data can be created that contains both data sources. E.g., if create_class_train_data
was called with NumDim = n + m = w, then set_feature_lengths_class_train_data can be called
with [n,m] in SubFeatureLength and [Name1, Name2] in Names to describe this situation for a later usage
of operators like select_feature_set_knn or select_feature_set_svm. Then the classification
problem has to be specified via calls of add_sample_class_train_data, by giving a vector of the first
data source and a vector of the second data source as the combined feature vector of length w. The result of the
call of select_feature_set_knn would then be either [Name1] if the first is more relevant, [Name2] if the
second is more relevant or [Name1, Name2] if both are necessary.
Parameters
. ClassTrainDataHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of the training data that should be partitioned into subfeatures.
. SubFeatureLength (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer
Length of the subfeatures.
. Names (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
Names of the subfeatures.


Example

* Find out which of the two features distinguishes two Classes


NameFeature1 := 'Good Feature'
NameFeature2 := 'Bad Feature'
LengthFeature1 := 3
LengthFeature2 := 2
* Create training data
create_class_train_data (LengthFeature1+LengthFeature2,\
ClassTrainDataHandle)
* Define the features which are in the training data
set_feature_lengths_class_train_data (ClassTrainDataHandle, [LengthFeature1,\
LengthFeature2], [NameFeature1, NameFeature2])
* Add training data
* |Feat1| |Feat2|
add_sample_class_train_data (ClassTrainDataHandle, 'row', [1,1,1, 2,1 ], 0)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [2,2,2, 2,1 ], 1)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [1,1,1, 3,4 ], 0)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [2,2,2, 3,4 ], 1)
* Add more data
* ...
* Select the better feature
select_feature_set_knn (ClassTrainDataHandle, 'greedy', [], [], KNNHandle,\
SelectedFeature, Score)
classify_class_knn (KNNHandle, [1,1,1], Result, Rating)
classify_class_knn (KNNHandle, [2,2,2], Result, Rating)
* Use the classifier
* ...

Result
If the parameters are valid, the operator set_feature_lengths_class_train_data returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

This operator modifies the state of the following input parameter:


• ClassTrainDataHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_class_train_data, add_sample_class_train_data
Possible Successors
select_feature_set_knn, select_feature_set_svm, select_feature_set_mlp,
select_feature_set_gmm
Module
Foundation

write_class_train_data ( : : ClassTrainDataHandle, FileName : )

Save the training data for classifiers in a file.


write_class_train_data writes the training data for classifiers ClassTrainDataHandle to the file
given by FileName. The training data can be read again with read_class_train_data. The default HALCON
file extension for the training data is ’ctd’.
Parameters
. ClassTrainDataHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of the training data.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
Name of the file in which the training data will be written.
File extension: .ctd
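Example

A minimal round-trip sketch; the file name is a placeholder.

* Save the collected training data ...
write_class_train_data (ClassTrainDataHandle, 'samples.ctd')
* ... and read it back later, e.g. on another machine
read_class_train_data ('samples.ctd', ClassTrainDataHandle2)
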
Result
write_class_train_data returns 2 (H_MSG_TRUE). An exception is raised if it was not possible to open
file FileName.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
See also
create_class_train_data, read_class_train_data
Module
Foundation

7.5 Neural Nets

add_class_train_data_mlp ( : : MLPHandle,
ClassTrainDataHandle : )

Add training data to a multilayer perceptron (MLP).


add_class_train_data_mlp adds the training data specified by ClassTrainDataHandle to a multi-
layer perceptron (MLP) specified by MLPHandle.
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle which receives the training data.
. ClassTrainDataHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Training data for a classifier.
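Example

A minimal sketch, assuming training data with 5-dimensional feature vectors and two classes was collected beforehand; the MLP parameters are placeholders.

create_class_mlp (5, 10, 2, 'softmax', 'normalization', 5, 42, MLPHandle)
add_class_train_data_mlp (MLPHandle, ClassTrainDataHandle)
train_class_mlp (MLPHandle, 200, 1, 0.01, Error, ErrorLog)
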
Result
If the parameters are valid, the operator add_class_train_data_mlp returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• MLPHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_class_mlp, create_class_train_data


Possible Successors
get_sample_class_mlp
Alternatives
add_sample_class_mlp
See also
create_class_mlp
Module
Foundation

add_sample_class_mlp ( : : MLPHandle, Features, Target : )

Add a training sample to the training data of a multilayer perceptron.


add_sample_class_mlp adds a training sample to the multilayer perceptron (MLP) given by MLPHandle.
The training sample is given by Features and Target. Features is the feature vector of the sample, and
consequently must be a real vector of length NumInput, as specified in create_class_mlp. Target is the
target vector of the sample, which must have the length NumOutput (see create_class_mlp) for all three
types of activation functions of the MLP (exception: see below). If the MLP is used for regression (function
approximation), i.e., if OutputFunction = ’linear’, Target is the value of the function at the coordinate
Features. In this case, Target can contain arbitrary real numbers. For OutputFunction = ’logistic’,
Target can only contain the values 0.0 and 1.0. A value of 1.0 specifies that the attribute in question is present,
while a value of 0.0 specifies that the attribute is absent. Because in this case the attributes are independent,
arbitrary combinations of 0.0 and 1.0 can be passed. For OutputFunction = ’softmax’, Target also can only
contain the values 0.0 and 1.0. In contrast to OutputFunction = ’logistic’, the value 1.0 must be present for
exactly one element of the tuple Target. The location in the tuple designates the class of the sample. For ease of
use, a single integer value may be passed if OutputFunction = ’softmax’. This value directly designates the
class of the sample, which is counted from 0, i.e., the class must be an integer between 0 and NumOutput − 1.
The class is converted to a target vector of length NumOutput internally.
Before the MLP can be trained with train_class_mlp, all training samples must be added to the MLP with
add_sample_class_mlp.
The number of currently stored training samples can be queried with get_sample_num_class_mlp. Stored
training samples can be read out again with get_sample_class_mlp.
Normally, it is useful to save the training samples in a file with write_samples_class_mlp to facilitate
reusing the samples, and to facilitate that, if necessary, new training samples can be added to the data set, and
hence to facilitate that a newly created MLP can be trained anew with the extended data set.
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector of the training sample to be stored.
. Target (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; integer / real
Class or target vector of the training sample to be stored.
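Example

A minimal sketch for OutputFunction = 'softmax' with NumOutput = 4; the two calls below are equivalent ways of specifying class 2 for a sample (each call adds one sample).

* Class given as integer index ...
add_sample_class_mlp (MLPHandle, Features, 2)
* ... or as target vector of length NumOutput
add_sample_class_mlp (MLPHandle, Features, [0.0,0.0,1.0,0.0])
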
Result
If the parameters are valid, the operator add_sample_class_mlp returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• MLPHandle


During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_class_mlp
Possible Successors
train_class_mlp, write_samples_class_mlp
Alternatives
read_samples_class_mlp
See also
clear_samples_class_mlp, get_sample_num_class_mlp, get_sample_class_mlp
Module
Foundation

classify_class_mlp ( : : MLPHandle, Features, Num : Class, Confidence )

Calculate the class of a feature vector by a multilayer perceptron.


classify_class_mlp computes the best Num classes of the feature vector Features with the multilayer
perceptron (MLP) MLPHandle and returns the classes in Class and the corresponding confidences (probabil-
ities) of the classes in Confidence. Before calling classify_class_mlp, the MLP must be trained with
train_class_mlp.
classify_class_mlp can only be called if the MLP is used as a classifier with OutputFunction = ’soft-
max’ (see create_class_mlp). Otherwise, an error message is returned. classify_class_mlp cor-
responds to a call to evaluate_class_mlp and an additional step that extracts the best Num classes. As
described with evaluate_class_mlp, the output values of the MLP can be interpreted as probabilities of the
occurrence of the respective classes. In most cases it should be sufficient to use Num = 1 in order to decide whether
the probability of the best class is high enough. In some applications it may be interesting to also take the second
best class into account (Num = 2), particularly if it can be expected that the classes show a significant degree of
overlap.
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector.
. Num (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Number of best classes to determine.
Default: 1
Suggested values: Num ∈ {1, 2, 3, 4, 5}
. Class (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Result of classifying the feature vector with the MLP.
. Confidence (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Confidence(s) of the class(es) of the feature vector.
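Example

A minimal sketch that inspects the two best classes; the confidence margin of 0.2 is an arbitrary, application-dependent threshold.

classify_class_mlp (MLPHandle, Features, 2, Classes, Confidences)
if (Confidences[0] - Confidences[1] < 0.2)
    * The two best classes overlap strongly; treat the result as ambiguous
endif
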
Result
If the parameters are valid, the operator classify_class_mlp returns the value 2 (H_MSG_TRUE). If neces-
sary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.


Possible Predecessors
train_class_mlp, read_class_mlp
Alternatives
apply_dl_classifier, evaluate_class_mlp
See also
create_class_mlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation

clear_class_mlp ( : : MLPHandle : )

Clear a multilayer perceptron.


clear_class_mlp clears the multilayer perceptron (MLP) given by MLPHandle and frees all memory re-
quired for the MLP. After calling clear_class_mlp, the MLP can no longer be used. The handle MLPHandle
becomes invalid.
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp(-array) ; handle
MLP handle.
Result
If MLPHandle is valid, the operator clear_class_mlp returns the value 2 (H_MSG_TRUE). If necessary, an
exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

This operator modifies the state of the following input parameter:


• MLPHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
classify_class_mlp, evaluate_class_mlp
See also
create_class_mlp, read_class_mlp, write_class_mlp, train_class_mlp
Module
Foundation

clear_samples_class_mlp ( : : MLPHandle : )

Clear the training data of a multilayer perceptron.


clear_samples_class_mlp clears all training samples that have been added to the multilayer
perceptron (MLP) MLPHandle with add_sample_class_mlp or read_samples_class_mlp.
clear_samples_class_mlp should only be used if the MLP is trained in the same process that uses the
MLP for evaluation with evaluate_class_mlp or for classification with classify_class_mlp. In
this case, the memory required for the training samples can be freed with clear_samples_class_mlp,
and hence memory can be saved. In the normal usage, in which the MLP is trained offline and written to
a file with write_class_mlp, it is typically unnecessary to call clear_samples_class_mlp because
write_class_mlp does not save the training samples, and hence the online process, which reads the MLP
with read_class_mlp, requires no memory for the training samples.
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp(-array) ; handle
MLP handle.
Result
If the parameters are valid, the operator clear_samples_class_mlp returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• MLPHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
train_class_mlp, write_samples_class_mlp
See also
create_class_mlp, clear_class_mlp, add_sample_class_mlp, read_samples_class_mlp
Module
Foundation

create_class_mlp ( : : NumInput, NumHidden, NumOutput, OutputFunction, Preprocessing, NumComponents, RandSeed : MLPHandle )

Create a multilayer perceptron for classification or regression.


create_class_mlp creates a neural net in the form of a multilayer perceptron (MLP), which can be used for
classification or regression (function approximation), depending on how OutputFunction is set. The MLP
consists of three layers: an input layer with NumInput input variables (units, neurons), a hidden layer with
NumHidden units, and an output layer with NumOutput output variables. The MLP performs the following
steps to calculate the activations zj of the hidden units from the input data xi (the so-called feature vector):

       a_j^{(1)} = \sum_{i=1}^{n_i} w_{ji}^{(1)} x_i + b_j^{(1)} ,   j = 1, \ldots, n_h

       z_j = \tanh\left( a_j^{(1)} \right) ,   j = 1, \ldots, n_h

Here, the matrix w_{ji}^{(1)} and the vector b_j^{(1)} are the weights of the input layer (first layer) of the MLP.
In the hidden layer (second layer), the activations z_j are transformed in a first step by using linear combinations
of the variables in an analogous manner as above:

       a_k^{(2)} = \sum_{j=1}^{n_h} w_{kj}^{(2)} z_j + b_k^{(2)} ,   k = 1, \ldots, n_o

Here, the matrix w_{kj}^{(2)} and the vector b_k^{(2)} are the weights of the second layer of the MLP.
The activation function used in the output layer can be determined by setting OutputFunction. For
OutputFunction = ’linear’, the data are simply copied:

       y_k = a_k^{(2)} ,   k = 1, \ldots, n_o

This type of activation function should be used for regression problems (function approximation). This activation
function is not suited for classification problems.
For OutputFunction = ’logistic’, the activations are computed as follows:

       y_k = \frac{1}{1 + \exp\left( -a_k^{(2)} \right)} ,   k = 1, \ldots, n_o

This type of activation function should be used for classification problems with multiple (NumOutput) independent
logical attributes as output. This kind of classification problem is relatively rare in practice.
For OutputFunction = ’softmax’, the activations are computed as follows:

       y_k = \frac{\exp\left( a_k^{(2)} \right)}{\sum_{l=1}^{n_o} \exp\left( a_l^{(2)} \right)} ,   k = 1, \ldots, n_o

This type of activation function should be used for common classification problems with multiple (NumOutput)
mutually exclusive classes as output. In particular, OutputFunction = ’softmax’ must be used for the classifi-
cation of pixel data with classify_image_class_mlp.
The parameters Preprocessing and NumComponents can be used to specify a preprocessing of the feature
vectors. For Preprocessing = ’none’, the feature vectors are passed unaltered to the MLP. NumComponents
is ignored in this case.
For all other values of Preprocessing, the training data set is used to compute a transformation of the feature
vectors during the training as well as later in the classification or evaluation.
For Preprocessing = ’normalization’, the feature vectors are normalized by subtracting the mean of the
training vectors and dividing the result by the standard deviation of the individual components of the training
vectors. Hence, the transformed feature vectors have a mean of 0 and a standard deviation of 1. The normalization
does not change the length of the feature vector. NumComponents is ignored in this case. This transformation can
be used if the mean and standard deviation of the feature vectors differs substantially from 0 and 1, respectively,
or for data in which the components of the feature vectors are measured in different units (e.g., if some of the
data are gray value features and some are region features, or if region features are mixed, e.g., ’circularity’
(unit: scalar) and ’area’ (unit: pixel squared)). In these cases, the training of the net will typically require fewer
iterations than without normalization.
For Preprocessing = ’principal_components’, a principal component analysis is performed. First, the feature
vectors are normalized (see above). Then, an orthogonal transformation (a rotation in the feature space) that
decorrelates the training vectors is computed. After the transformation, the mean of the training vectors is 0 and
the covariance matrix of the training vectors is a diagonal matrix. The transformation is chosen such that most of
the variation is contained in the first components of the transformed feature vector. With this, it is possible to omit
the transformed features in the last components of the feature vector,
which typically are mainly influenced by noise, without losing a large amount of information. The parameter
NumComponents can be used to determine how many of the transformed feature vector components should be
used. Up to NumInput components can be selected. The operator get_prep_info_class_mlp can be
used to determine how much information each transformed component contains. Hence, it aids the selection of
NumComponents. Like data normalization, this transformation can be used if the mean and standard deviation of
the feature vectors differs substantially from 0 and 1, respectively, or for feature vectors in which the components
of the data are measured in different units. In addition, this transformation is useful if it can be expected that the
features are highly correlated.
In contrast to the above three transformations, which can be used for all MLP types, the transformation spec-
ified by Preprocessing = ’canonical_variates’ can only be used if the MLP is used as a classifier with

HALCON/HDevelop Reference Manual, 2024-11-13


7.5. NEURAL NETS 555

OutputFunction = ’softmax’). The computation of the canonical variates is also called linear discrimi-
nant analysis. In this case, a transformation that first normalizes the training vectors and then decorrelates the
training vectors on average over all classes is computed. At the same time, the transformation maximally sepa-
rates the mean values of the individual classes. As for Preprocessing = ’principal_components’, the trans-
formed components are sorted by information content, and hence transformed components with little informa-
tion content can be omitted. For canonical variates, up to min(NumOutput − 1, NumInput) components can
be selected. Also in this case, the information content of the transformed components can be determined with
get_prep_info_class_mlp. Like principal component analysis, canonical variates can be used to reduce
the amount of data without losing a large amount of information, while additionally optimizing the separability of
the classes after the data reduction.
For the last two types of transformations (’principal_components’ and ’canonical_variates’), the actual number of
input units of the MLP is determined by NumComponents, whereas NumInput determines the dimensionality
of the input data (i.e., the length of the untransformed feature vector). Hence, by using one of these two transfor-
mations, the number of input variables, and thus usually also the number of hidden units can be reduced. With this,
the time needed to train the MLP and to evaluate and classify a feature vector is typically reduced.
Usually, NumHidden should be selected in the order of magnitude of NumInput and NumOutput. In many
cases, much smaller values of NumHidden already lead to very good classification results. If NumHidden is
chosen too large, the MLP may overfit the training data, which typically leads to bad generalization properties, i.e.,
the MLP learns the training data very well, but does not return very good results on unknown data.
create_class_mlp initializes the above described weights with random numbers. To ensure that the results of
training the classifier with train_class_mlp are reproducible, the seed value of the random number generator
is passed in RandSeed. If the training results in a relatively large error, it sometimes may be possible to achieve
a smaller error by selecting a different value for RandSeed and retraining an MLP.
After the MLP has been created, typically training samples are added to the MLP by repeatedly calling
add_sample_class_mlp or read_samples_class_mlp. After this, the MLP is typically trained us-
ing train_class_mlp. Hereafter, the MLP can be saved using write_class_mlp. Alternatively, the MLP
can be used immediately after training to evaluate data using evaluate_class_mlp or, if the MLP is used as
a classifier (i.e., for OutputFunction = ’softmax’), to classify data using classify_class_mlp.
The training of the MLP will usually result in very sharp boundaries between the different classes, i.e., the confi-
dence for one class will drop from close to 1 (within the region of the class) to close to 0 (within the region of a
different class) within a very narrow “band” in the feature space. If the classes do not overlap, this transition hap-
pens at a suitable location between the classes; if the classes overlap, the transition happens at a suitable location
within the overlapping area. While this sharp transition is desirable in many applications, in some applications
a smoother transition between different classes (i.e., a transition within a wider “band” in the feature space) is
desirable to reflect a level of uncertainty within the region in the feature space between the classes. Furthermore,
as described above, it may be desirable to prevent overfitting of the MLP to the training data. For these purposes,
the MLP can be regularized by using set_regularization_params_class_mlp.
An MLP, as defined above, has no inherent capability for novelty detection, i.e., it will classify a random fea-
ture vector into one of the classes with a confidence close to 1 (unless the random feature vector happens to
lie in a region of the feature space in which the training samples of different classes overlap). In some appli-
cations, however, it is desirable to reject feature vectors that do not lie close to any class, where “closeness” is
defined by the proximity of the feature vector to the collection of feature vectors in the training set. To pro-
vide an MLP with the ability for novelty detection, i.e., to reject feature vectors that do not belong to any class,
an explicit rejection class can be created by setting NumOutput to the number of actual classes plus 1. Then,
set_rejection_params_class_mlp can be used to configure train_class_mlp to automatically gen-
erate samples for this rejection class.
The combination of regularization and an automatic generation of a rejection class is useful in many applications
since it provides a smooth transition between the actual classes and from the actual classes to the rejection class.
This reflects the requirement of these applications that only feature vectors within the area of the feature space
that corresponds to the training samples of each class should have a confidence close to 1, whereas random feature
vectors not belonging to any class should have a confidence close to 0, and that transitions between the classes
should be smooth, reflecting a growing degree of uncertainty the farther a feature vector lies from the respective
class. In particular, OCR applications sometimes have this requirement (see create_ocr_class_mlp).
A comparison of the MLP and the support vector machine (SVM) (see create_class_svm) typically shows
that SVMs are generally faster at training, especially for huge training sets, and achieve slightly better recognition
rates than MLPs. The MLP is faster at classification and should therefore be preferred in time critical applications.
Please note that this guideline assumes optimal tuning of the parameters.


Parameters
. NumInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of input variables (features) of the MLP.
Default: 20
Suggested values: NumInput ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction: NumInput >= 1
. NumHidden (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of hidden units of the MLP.
Default: 10
Suggested values: NumHidden ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 120, 150}
Restriction: NumHidden >= 1
. NumOutput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of output variables (classes) of the MLP.
Default: 5
Suggested values: NumOutput ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 120, 150}
Restriction: NumOutput >= 1
. OutputFunction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the activation function in the output layer of the MLP.
Default: ’softmax’
List of values: OutputFunction ∈ {’linear’, ’logistic’, ’softmax’}
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of preprocessing used to transform the feature vectors.
Default: ’normalization’
List of values: Preprocessing ∈ {’none’, ’normalization’, ’principal_components’, ’canonical_variates’}
. NumComponents (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = ’none’ and
Preprocessing = ’normalization’).
Default: 10
Suggested values: NumComponents ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction: NumComponents >= 1
. RandSeed (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Seed value of the random number generator that is used to initialize the MLP with random values.
Default: 42
. MLPHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
Example

* Use the MLP for regression (function approximation)


create_class_mlp (1, NumHidden, 1, 'linear', 'none', 1, 42, MLPHandle)
* Generate the training data
* D = [...]
* T = [...]
* Add the training data
for J := 0 to NumData-1 by 1
add_sample_class_mlp (MLPHandle, D[J], T[J])
endfor
* Train the MLP
train_class_mlp (MLPHandle, 200, 0.001, 0.001, Error, ErrorLog)
* Generate test data
* X = [...]
* Compute the output of the MLP on the test data
for J := 0 to N-1 by 1
evaluate_class_mlp (MLPHandle, X[J], Y)
endfor

* Use the MLP for classification


create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
'normalization', NumIn, 42, MLPHandle)


* Generate and add the training data
for J := 0 to NumData-1 by 1
* Generate training features and classes
* Data = [...]
* Class = [...]
add_sample_class_mlp (MLPHandle, Data, Class)
endfor
* Train the MLP
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
* Use the MLP to classify unknown data
for J := 0 to N-1 by 1
* Extract features
* Features = [...]
classify_class_mlp (MLPHandle, Features, 1, Class, Confidence)
endfor

Result
If the parameters are valid, the operator create_class_mlp returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
add_sample_class_mlp, set_regularization_params_class_mlp,
set_rejection_params_class_mlp
Alternatives
read_dl_classifier, create_class_svm, create_class_gmm
See also
clear_class_mlp, train_class_mlp, classify_class_mlp, evaluate_class_mlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation

deserialize_class_mlp ( : : SerializedItemHandle : MLPHandle )

Deserialize a serialized multilayer perceptron.


deserialize_class_mlp deserializes a multilayer perceptron (MLP), including its training samples,
that was serialized by serialize_class_mlp (see fwrite_serialized_item for an introduction
of the basic principle of serialization). The serialized multilayer perceptron is defined by the handle
SerializedItemHandle. The deserialized values are stored in an automatically created multilayer perceptron
with the handle MLPHandle.


Parameters
. SerializedItemHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
. MLPHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
Result
If the parameters are valid, the operator deserialize_class_mlp returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
fread_serialized_item, receive_serialized_item, serialize_class_mlp
Possible Successors
classify_class_mlp, evaluate_class_mlp, create_class_lut_mlp
See also
create_class_mlp, write_class_mlp, serialize_class_mlp
Module
Foundation

evaluate_class_mlp ( : : MLPHandle, Features : Result )

Calculate the evaluation of a feature vector by a multilayer perceptron.


evaluate_class_mlp computes the result Result of evaluating the feature vector Features with
the multilayer perceptron (MLP) MLPHandle. The formulas used for the evaluation are described
with create_class_mlp. Before calling evaluate_class_mlp, the MLP must be trained with
train_class_mlp.
If the MLP is used for regression (function approximation), i.e., if (OutputFunction = ’linear’), Result
is the value of the function at the coordinate Features. For OutputFunction = ’logistic’ and ’softmax’,
the values in Result can be interpreted as probabilities. Hence, for OutputFunction = ’logistic’ the ele-
ments of Result represent the probabilities of the presence of the respective independent attributes. Typically,
a threshold of 0.5 is used to decide whether the attribute is present or not. Depending on the application, other
thresholds may be used as well. For OutputFunction = ’softmax’ usually the position of the maximum value
of Result is interpreted as the class of the feature vector, and the corresponding value as the probability of
the class. In this case, classify_class_mlp should be used instead of evaluate_class_mlp because
classify_class_mlp directly returns the class and corresponding probability.
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector.
. Result (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Result of evaluating the feature vector with the MLP.
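The following sketch illustrates the interpretation for OutputFunction = ’logistic’ described above; the threshold of 0.5 and the feature vector Features are assumptions for illustration:

evaluate_class_mlp (MLPHandle, Features, Result)
* Decide for each independent attribute whether it is present
PresentAttributes := []
for I := 0 to |Result|-1 by 1
    if (Result[I] >= 0.5)
        PresentAttributes := [PresentAttributes,I]
    endif
endfor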
Result
If the parameters are valid, the operator evaluate_class_mlp returns the value 2 (H_MSG_TRUE). If neces-
sary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
train_class_mlp, read_class_mlp
Alternatives
classify_class_mlp
See also
create_class_mlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation

get_class_train_data_mlp ( : : MLPHandle : ClassTrainDataHandle )

Get the training data of a multilayer perceptron (MLP).


get_class_train_data_mlp gets the training data of a multilayer perceptron (MLP) and returns it in
ClassTrainDataHandle.
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
Handle of an MLP that contains training data.
. ClassTrainDataHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of the training data of the classifier.
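A minimal sketch of reusing the extracted training data to train a different classifier, here an SVM; the SVM parameters (kernel, Nu, mode) are illustrative assumptions, not prescribed values:

* Transfer the MLP training samples to an SVM
get_class_train_data_mlp (MLPHandle, ClassTrainDataHandle)
create_class_svm (NumIn, 'rbf', 0.02, 0.05, NumOut, 'one-versus-one', \
                  'normalization', NumIn, SVMHandle)
add_class_train_data_svm (SVMHandle, ClassTrainDataHandle)
train_class_svm (SVMHandle, 0.001, 'default')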
Result
If the parameters are valid, the operator get_class_train_data_mlp returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
add_sample_class_mlp, read_samples_class_mlp
Possible Successors
add_class_train_data_svm, add_class_train_data_gmm, add_class_train_data_knn
See also
create_class_train_data
Module
Foundation

get_params_class_mlp ( : : MLPHandle : NumInput, NumHidden,


NumOutput, OutputFunction, Preprocessing, NumComponents )

Return the parameters of a multilayer perceptron.

get_params_class_mlp returns the parameters of a multilayer perceptron (MLP) that were specified when
the MLP was created with create_class_mlp. This is particularly useful if the MLP was read from a file with
read_class_mlp. The output of get_params_class_mlp can, for example, be used to check whether the
feature vectors and, if necessary, the target data to be used with the MLP have the correct lengths. For a description
of the parameters, see create_class_mlp.
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. NumInput (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of input variables (features) of the MLP.
. NumHidden (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of hidden units of the MLP.
. NumOutput (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of output variables (classes) of the MLP.
. OutputFunction (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the activation function in the output layer of the MLP.
. Preprocessing (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of preprocessing used to transform the feature vectors.
. NumComponents (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Preprocessing parameter: Number of transformed features.
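A short sketch of the consistency check mentioned above; the file name and the feature tuple Features are assumptions for illustration:

read_class_mlp ('classifier.gmc', MLPHandle)
get_params_class_mlp (MLPHandle, NumInput, NumHidden, NumOutput, \
                      OutputFunction, Preprocessing, NumComponents)
* Verify that the feature vector has the expected length
if (|Features| != NumInput)
    * Feature vector does not match the MLP; handle the error
    stop ()
endif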
Result
If the parameters are valid, the operator get_params_class_mlp returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
create_class_mlp, read_class_mlp
Possible Successors
add_sample_class_mlp, train_class_mlp
See also
evaluate_class_mlp, classify_class_mlp
Module
Foundation

get_prep_info_class_mlp ( : : MLPHandle,
Preprocessing : InformationCont, CumInformationCont )

Compute the information content of the preprocessed feature vectors of a multilayer perceptron.
get_prep_info_class_mlp computes the information content of the training vectors that have been
transformed with the preprocessing given by Preprocessing. Preprocessing can be set to ’princi-
pal_components’ or ’canonical_variates’. The preprocessing methods are described with create_class_mlp.
The information content is derived from the variations of the transformed components of the feature vector, i.e.,
it is computed solely based on the training data, independent of any error rate on the training data. The informa-
tion content is computed for all relevant components of the transformed feature vectors (NumInput for ’princi-
pal_components’ and min(NumOutput − 1, NumInput) for ’canonical_variates’, see create_class_mlp),
and is returned in InformationCont as a number between 0 and 1. To convert the information content into
a percentage, it simply needs to be multiplied by 100. The cumulative information content of the first n compo-
nents is returned in the n-th component of CumInformationCont, i.e., CumInformationCont contains

HALCON/HDevelop Reference Manual, 2024-11-13


7.5. NEURAL NETS 561

the sums of the first n elements of InformationCont. To use get_prep_info_class_mlp, a suffi-


cient number of samples must be added to the multilayer perceptron (MLP) given by MLPHandle by using
add_sample_class_mlp or read_samples_class_mlp.
InformationCont and CumInformationCont can be used to decide how many components of the
transformed feature vectors contain relevant information. An often used criterion is to require that the trans-
formed data must represent x% (e.g., 90%) of the data. This can be decided easily from the first value
of CumInformationCont that lies above x%. The number thus obtained can be used as the value for
NumComponents in a new call to create_class_mlp. The call to get_prep_info_class_mlp al-
ready requires the creation of an MLP, and hence the setting of NumComponents in create_class_mlp
to an initial value. However, if get_prep_info_class_mlp is called it is typically not known how many
components are relevant, and hence how to set NumComponents in this call. Therefore, the following two-step
approach should typically be used to select NumComponents: In a first step, an MLP with the maximum number
for NumComponents is created (NumInput for ’principal_components’ and min(NumOutput − 1, NumInput)
for ’canonical_variates’). Then, the training samples are added to the MLP and are saved in a file using
write_samples_class_mlp. Subsequently, get_prep_info_class_mlp is used to determine the in-
formation content of the components, and with this NumComponents. After this, a new MLP with the desired
number of components is created, and the training samples are read with read_samples_class_mlp. Finally,
the MLP is trained with train_class_mlp.
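The selection of NumComponents from CumInformationCont can also be scripted. The following sketch picks the smallest number of components whose cumulative information content reaches at least 90%; the threshold of 0.9 is an assumption for illustration:

get_prep_info_class_mlp (MLPHandle, 'principal_components', \
                         InformationCont, CumInformationCont)
NumComp := |CumInformationCont|
for I := 0 to |CumInformationCont|-1 by 1
    if (CumInformationCont[I] >= 0.9)
        NumComp := I + 1
        break
    endif
endfor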
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of preprocessing used to transform the feature vectors.
Default: ’principal_components’
List of values: Preprocessing ∈ {’principal_components’, ’canonical_variates’}
. InformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Relative information content of the transformed feature vectors.
. CumInformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Cumulative information content of the transformed feature vectors.
Example

* Create the initial MLP


create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
'principal_components', NumIn, 42, MLPHandle)
* Generate and add the training data
for J := 0 to NumData-1 by 1
* Generate training features and classes
* Data = [...]
* Class = [...]
add_sample_class_mlp (MLPHandle, Data, Class)
endfor
write_samples_class_mlp (MLPHandle, 'samples.mtf')
* Compute the information content of the transformed features
get_prep_info_class_mlp (MLPHandle, 'principal_components',\
InformationCont, CumInformationCont)
* Determine NumComp by inspecting InformationCont and CumInformationCont
* NumComp = [...]
* Create the actual MLP
create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
'principal_components', NumComp, 42, MLPHandle)
* Train the MLP
read_samples_class_mlp (MLPHandle, 'samples.mtf')
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
write_class_mlp (MLPHandle, 'classifier.mlp')

Result
If the parameters are valid, the operator get_prep_info_class_mlp returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.

get_prep_info_class_mlp may return the error 9211 (Matrix is not positive definite) if Preprocessing
= ’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
add_sample_class_mlp, read_samples_class_mlp
Possible Successors
clear_class_mlp, create_class_mlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation

get_regularization_params_class_mlp ( : : MLPHandle,
GenParamName : GenParamValue )

Return the regularization parameters of a multilayer perceptron.


get_regularization_params_class_mlp returns the regularization parameters of a multilayer
perceptron (MLP) that were specified with set_regularization_params_class_mlp. Further-
more, get_regularization_params_class_mlp returns the parameters that were determined by
an automatic determination of the regularization parameters. For a description of the parameters, see
set_regularization_params_class_mlp.
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the regularization parameter to return.
Default: ’weight_prior’
List of values: GenParamName ∈ {’weight_prior’, ’noise_prior’, ’num_well_determined_params’,
’fraction_well_determined_params’, ’num_outer_iterations’, ’num_inner_iterations’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Value of the regularization parameter.
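A brief sketch of querying the automatically determined values after training; it assumes the MLP was configured for automatic determination with set_regularization_params_class_mlp, and the training parameters are illustrative:

train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
get_regularization_params_class_mlp (MLPHandle, 'weight_prior', WeightPrior)
get_regularization_params_class_mlp (MLPHandle, \
                                     'fraction_well_determined_params', \
                                     FractionParams)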
Result
If the parameters are valid, the operator get_regularization_params_class_mlp returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
set_regularization_params_class_mlp, read_class_mlp
Possible Successors
train_class_mlp
Module
Foundation

get_rejection_params_class_mlp ( : : MLPHandle,
GenParamName : GenParamValue )

Get the parameters of a rejection class.


get_rejection_params_class_mlp returns the rejection class parameters of a multilayer perceptron
(MLP) that were specified with set_rejection_params_class_mlp. For a description of the parame-
ters, see set_rejection_params_class_mlp.
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Names of the generic parameters to return.
Default: ’sampling_strategy’
List of values: GenParamName ∈ {’sampling_strategy’, ’hyperbox_tolerance’, ’rejection_sample_factor’,
’random_seed’, ’rejection_class_index’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string / real / integer
Values of the generic parameters.
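A minimal sketch of reading back the rejection class configuration; it assumes the parameters were previously set with set_rejection_params_class_mlp:

get_rejection_params_class_mlp (MLPHandle, 'sampling_strategy', SamplingStrategy)
get_rejection_params_class_mlp (MLPHandle, 'rejection_sample_factor', \
                                SampleFactor)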
Result
If the parameters are valid, the operator get_rejection_params_class_mlp returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
create_class_mlp
Possible Successors
train_class_mlp
Module
Foundation

get_sample_class_mlp ( : : MLPHandle, IndexSample : Features,


Target )

Return a training sample from the training data of a multilayer perceptron.


get_sample_class_mlp reads out a training sample from the multilayer perceptron (MLP) given by
MLPHandle that was added with add_sample_class_mlp or read_samples_class_mlp. The in-
dex of the sample is specified with IndexSample. The index is counted from 0, i.e., IndexSample
must be a number between 0 and NumSamples − 1, where NumSamples can be determined with
get_sample_num_class_mlp. The training sample is returned in Features and Target. Features
is a feature vector of length NumInput, while Target is a target vector of length NumOutput (see
add_sample_class_mlp and create_class_mlp).
get_sample_class_mlp can, for example, be used to reclassify the training data with
classify_class_mlp in order to determine which training samples, if any, are classified incorrectly.
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. IndexSample (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of stored training sample.

. Features (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real


Feature vector of the training sample.
. Target (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Target vector of the training sample.
Example

* Train an MLP
create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
'canonical_variates', NumComp, 42, MLPHandle)
read_samples_class_mlp (MLPHandle, 'samples.mtf')
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
* Reclassify the training samples
get_sample_num_class_mlp (MLPHandle, NumSamples)
for I := 0 to NumSamples-1 by 1
get_sample_class_mlp (MLPHandle, I, Data, Target)
classify_class_mlp (MLPHandle, Data, 1, Class, Confidence)
Result := gen_tuple_const(NumOut,0)
Result[Class] := 1
Diffs := Target-Result
if (sum(fabs(Diffs)) > 0)
* Sample has been classified incorrectly
endif
endfor

Result
If the parameters are valid, the operator get_sample_class_mlp returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
add_sample_class_mlp, read_samples_class_mlp, get_sample_num_class_mlp
Possible Successors
classify_class_mlp, evaluate_class_mlp
See also
create_class_mlp
Module
Foundation

get_sample_num_class_mlp ( : : MLPHandle : NumSamples )

Return the number of training samples stored in the training data of a multilayer perceptron.
get_sample_num_class_mlp returns in NumSamples the number of training samples that are stored in
the multilayer perceptron (MLP) given by MLPHandle. get_sample_num_class_mlp should be called
before the individual training samples are accessed with get_sample_class_mlp, e.g., for the purpose of
reclassifying the training data (see get_sample_class_mlp).

Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. NumSamples (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of stored training samples.
Result
If MLPHandle is valid, the operator get_sample_num_class_mlp returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
add_sample_class_mlp, read_samples_class_mlp
Possible Successors
get_sample_class_mlp
See also
create_class_mlp
Module
Foundation

read_class_mlp ( : : FileName : MLPHandle )

Read a multilayer perceptron from a file.


read_class_mlp reads a multilayer perceptron (MLP) that has been stored with write_class_mlp.
Since the training of an MLP can consume a relatively long time, the MLP is typically trained in an of-
fline process and written to a file with write_class_mlp. In the online process the MLP is read with
read_class_mlp and subsequently used for evaluation with evaluate_class_mlp or for classification
with classify_class_mlp. The default HALCON file extension for the MLP classifier is ’gmc’.
Parameters
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name.
File extension: .gmc
. MLPHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
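A sketch of the offline/online split described above; the file name, the training parameters, and the feature vector Features are illustrative assumptions:

* Offline: train the MLP and store it in a file
train_class_mlp (MLPHandle, 200, 1, 0.01, Error, ErrorLog)
write_class_mlp (MLPHandle, 'classifier.gmc')
clear_class_mlp (MLPHandle)
* Online: read the MLP and classify unknown data
read_class_mlp ('classifier.gmc', MLPHandle)
classify_class_mlp (MLPHandle, Features, 1, Class, Confidence)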
Result
If the parameters are valid, the operator read_class_mlp returns the value 2 (H_MSG_TRUE). If necessary, an
exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
classify_class_mlp, evaluate_class_mlp, create_class_lut_mlp

Alternatives
read_dl_classifier
See also
create_class_mlp, write_class_mlp
Module
Foundation

read_samples_class_mlp ( : : MLPHandle, FileName : )

Read the training data of a multilayer perceptron from a file.


read_samples_class_mlp reads training samples from the file given by FileName and adds them to
the training samples that have already been added to the multilayer perceptron (MLP) given by MLPHandle.
The MLP must be created with create_class_mlp before calling read_samples_class_mlp.
As described with train_class_mlp and write_samples_class_mlp, the operators
read_samples_class_mlp, add_sample_class_mlp, and write_samples_class_mlp can
be used to build up an extensive set of training samples, and hence to improve the performance of the MLP by
retraining the MLP with extended data sets.
It should be noted that the training samples must have the correct dimensionality. The feature vectors and tar-
get vectors stored in FileName must have the lengths NumInput and NumOutput that were specified with
create_class_mlp. If this is not the case an error message is returned.
Parameters

. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle


MLP handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name.
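A sketch of extending a stored sample set with new samples and retraining, as described above; the file names, loop bound, and training parameters are assumptions for illustration:

create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
                  'normalization', NumIn, 42, MLPHandle)
* Read the previously written samples and add new ones
read_samples_class_mlp (MLPHandle, 'samples.mtf')
for J := 0 to NumNewData-1 by 1
    * NewData = [...]
    * NewClass = [...]
    add_sample_class_mlp (MLPHandle, NewData, NewClass)
endfor
write_samples_class_mlp (MLPHandle, 'samples_extended.mtf')
train_class_mlp (MLPHandle, 200, 1, 0.01, Error, ErrorLog)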
Result
If the parameters are valid, the operator read_samples_class_mlp returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

This operator modifies the state of the following input parameter:


• MLPHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_class_mlp
Possible Successors
train_class_mlp
Alternatives
add_sample_class_mlp
See also
write_samples_class_mlp, clear_samples_class_mlp
Module
Foundation

select_feature_set_mlp ( : : ClassTrainDataHandle, SelectionMethod,


GenParamName, GenParamValue : MLPHandle, SelectedFeatureIndices,
Score )

Selects an optimal combination of features to classify the provided data.


select_feature_set_mlp selects an optimal subset from a set of features to solve a given clas-
sification problem. The classification problem has to be specified with annotated training data in
ClassTrainDataHandle and will be classified by a Multilayer Perceptron. Details of the properties of this
classifier can be found in create_class_mlp.
The result of the operator is a trained classifier that is returned in MLPHandle. Additionally, the list of indices or
names of the selected features is returned in SelectedFeatureIndices. To use this classifier, calculate for
new input data all features mentioned in SelectedFeatureIndices and pass them to the classifier.
A possible application of this operator can be a comparison of different parameter sets for certain feature extraction
techniques. Another application is to search for a feature that is discriminating between different classes.
To define the features that should be selected from ClassTrainDataHandle, the dimensions
of the feature vectors in ClassTrainDataHandle can be grouped into subfeatures by calling
set_feature_lengths_class_train_data. A subfeature can contain several subsequent elements of
a feature vector. select_feature_set_mlp decides for each of these subfeatures whether it is better to use it for the classification or to leave it out.
The indices of the selected subfeatures are returned in SelectedFeatureIndices. If names were set
in set_feature_lengths_class_train_data, these names are returned instead of the indices. If
set_feature_lengths_class_train_data was not called for ClassTrainDataHandle before,
each element of the feature vector is considered as a subfeature.
The selection method SelectionMethod is either a greedy search ’greedy’ (iteratively add the feature with the highest gain) or the dynamically oscillating search ’greedy_oscillating’ (add the feature with the highest gain and then test whether any of the already added features can be left out without a great loss). The method ’greedy’ is generally preferable, since it is faster. Only in cases where the subfeatures are low-dimensional or redundant should the method ’greedy_oscillating’ be chosen.
The optimization criterion is the classification rate of a two-fold cross-validation of the training data. The best
achieved value is returned in Score.
With GenParamName and GenParamValue, the number of units in the hidden layer can be set via ’num_hidden’. The default value is 80. Larger values for this parameter lead to longer classification times, but allow a more expressive classifier.
Attention
This operator may take considerable time, depending on the size of the data and the number of features.
Please note that this operator should not be called if only a small set of training data is available. Due to the risk of overfitting, the operator select_feature_set_mlp may deliver a classifier with a very high score. However, the classifier may perform poorly when tested.
Parameters
. ClassTrainDataHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of the training data.
. SelectionMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Method to perform the selection.
Default: ’greedy’
List of values: SelectionMethod ∈ {’greedy’, ’greedy_oscillating’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Names of generic parameters to configure the selection process and the classifier.
Default: []
List of values: GenParamName ∈ {’num_hidden’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer / string
Values of generic parameters to configure the selection process and the classifier.
Default: []
Suggested values: GenParamValue ∈ {50, 80, 100}
. MLPHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
A trained MLP classifier using only the selected features.

. SelectedFeatureIndices (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string


The selected feature set, given as the indices or names of the selected subfeatures.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
The achieved score using two-fold cross-validation.
Example

* Find out which of the two features distinguishes two Classes


NameFeature1 := 'Good Feature'
NameFeature2 := 'Bad Feature'
LengthFeature1 := 3
LengthFeature2 := 2
* Create training data
create_class_train_data (LengthFeature1+LengthFeature2,\
ClassTrainDataHandle)
* Define the features which are in the training data
set_feature_lengths_class_train_data (ClassTrainDataHandle, [LengthFeature1,\
LengthFeature2], [NameFeature1, NameFeature2])
* Add training data
* |Feat1| |Feat2|
add_sample_class_train_data (ClassTrainDataHandle, 'row', [1,1,1, 2,1 ], 0)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [2,2,2, 2,1 ], 1)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [1,1,1, 3,4 ], 0)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [2,2,2, 3,4 ], 1)
* Add more data
* ...
* Select the better feature with a MLP
select_feature_set_mlp (ClassTrainDataHandle, 'greedy', [], [], MLPHandle,\
SelectedFeatureMLP, Score)
* Use the classifier
* ...

Result
If the parameters are valid, the operator select_feature_set_mlp returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
create_class_train_data, add_sample_class_train_data,
set_feature_lengths_class_train_data
Possible Successors
classify_class_mlp
Alternatives
select_feature_set_knn, select_feature_set_svm, select_feature_set_gmm
See also
select_feature_set_trainf_mlp, gray_features, region_features
Module
Foundation

serialize_class_mlp ( : : MLPHandle : SerializedItemHandle )

Serialize a multilayer perceptron (MLP).


serialize_class_mlp serializes a multilayer perceptron (MLP) and its stored training samples (see
fwrite_serialized_item for an introduction to the basic principle of serialization). The same data that
is written in a file by write_class_mlp and write_samples_class_mlp is converted to a serialized
item. The multilayer perceptron is defined by the handle MLPHandle. The serialized multilayer perceptron is
returned by the handle SerializedItemHandle and can be deserialized by deserialize_class_mlp.
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. SerializedItemHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
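A minimal sketch of serializing a trained MLP and writing it to a file; the file name is an assumption for illustration (sending the item over a socket with send_serialized_item works analogously):

serialize_class_mlp (MLPHandle, SerializedItemHandle)
open_file ('classifier.smlp', 'output_binary', FileHandle)
fwrite_serialized_item (FileHandle, SerializedItemHandle)
close_file (FileHandle)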
Result
If the parameters are valid, the operator serialize_class_mlp returns the value 2 (H_MSG_TRUE). If nec-
essary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
train_class_mlp
Possible Successors
clear_class_mlp, fwrite_serialized_item, send_serialized_item,
deserialize_class_mlp
See also
create_class_mlp, read_class_mlp, write_samples_class_mlp,
deserialize_class_mlp
Module
Foundation

set_regularization_params_class_mlp ( : : MLPHandle,
GenParamName, GenParamValue : )

Set the regularization parameters of a multilayer perceptron.


set_regularization_params_class_mlp sets the regularization parameters of the multilayer percep-
tron (MLP) passed in MLPHandle. The regularization parameter to be set is specified with GenParamName. Its
value is specified with GenParamValue.
GenParamName can assume the following values:

’num_outer_iterations’: This parameter determines whether the regularization parameters should be determined
automatically (GenParamValue >= 1) or manually (GenParamValue = 0, default), as described be-
low in the sections “Technical Background” and “Automatic Determination of the Regularization Parame-
ters”. As described in detail in the section “Automatic Determination of the Regularization Parameters”,
’num_outer_iterations’ should not be set too large (in the range of 1 to 5) to enable manual checking of the
convergence of the automatic determination of the regularization parameters.
’num_inner_iterations’: This parameter potentially enables somewhat faster convergence of the automatic deter-
mination of the regularization parameters, as described below in the section “Automatic Determination of the
Regularization Parameters”. It should typically be left at its default value of 1.

’weight_prior’: On the one hand, this selects the regularization model to be used, as described below in the section
“Technical Background”. On the other hand, if manual determination of the regularization parameters has
been selected (i.e., ’num_outer_iterations’ = 0), the regularization parameters are set with GenParamName,
whereas the initial values of the regularization parameters are set if automatic determination of the regular-
ization parameters has been selected (i.e., ’num_outer_iterations’ >= 1), as described below in the section
“Automatic Determination of the Regularization Parameters”. Manual determination of the regularization
parameters (see the section “Regularization Parameters” below) is only realistic if a single regularization
parameter is used. In all other cases, the regularization parameters should be determined automatically.
’noise_prior’: This allows to specify a noise prior for MLPs that have been configured for regression, as described
below in the section “Application Areas”. If manual determination of the regularization parameters has been
selected, the noise prior is set with GenParamName, whereas the initial value of the noise prior is set if
automatic determination of the regularization parameters has been selected. Typically, it is only useful to use
this parameter if the regularization parameters are determined automatically.

Please note that the automatic determination of the regularization parameters requires a very large amount of
memory and runtime, as described in detail in the section “Complexity” below. Therefore, NumHidden should
not be selected too large when the MLP is created with create_class_mlp. For example, normal OCR
applications seldom require NumHidden to be larger than 30-60.
Application Areas
As described at create_class_mlp, it may be desirable to regularize the MLP to enforce a smoother transition
of the confidences between the different classes and to prevent overfitting of the MLP to the training data. To
achieve this, a penalty for large MLP weights (which are the main reason for very sharp transitions between classes)
can be added to the training of the MLP in train_class_mlp by setting GenParamName to ’weight_prior’
and setting GenParamValue to a value > 0.
If the MLP has been configured for regression (i.e., if OutputFunction was set to ’linear’ in
create_class_mlp), an inverse variance of the expected noise in the data can be specified by setting
GenParamName to ’noise_prior’ and setting GenParamValue to a value > 0. Setting the noise prior only
has an effect if a weight prior has been specified. In this case, it can be used to weight the data error term (the
output error of the MLP) against the weight error term.
As described in more detail below, the regularization parameters of the MLP may be determined automatically (at
the expense of significantly increased training times) by setting GenParamName to ’num_outer_iterations’ and
setting GenParamValue to a value > 0.
Technical Background
There are three different kinds of penalty terms that can be set with ’weight_prior’. Note that in the following the parameters $w_{ji}^{(l)}$ and $b_k^{(l)}$ refer to the weights of the different layers of the MLP, as described in create_class_mlp.
If a single value $\alpha$ is specified, all MLP weights are penalized equally by adding the following term to the optimization in train_class_mlp:

$$E_W = \frac{\alpha}{2}\left(\sum_{i=1}^{n_i}\sum_{j=1}^{n_h}\left(w_{ji}^{(1)}\right)^2 + \sum_{j=1}^{n_h}\left(b_j^{(1)}\right)^2 + \sum_{j=1}^{n_h}\sum_{k=1}^{n_o}\left(w_{kj}^{(2)}\right)^2 + \sum_{k=1}^{n_o}\left(b_k^{(2)}\right)^2\right)$$

Alternatively, four values $[\alpha_{w1}, \alpha_{b1}, \alpha_{w2}, \alpha_{b2}]$ can be specified. These four parameters enable the individual regularization of the four groups of weights:

$$E_W = \frac{\alpha_{w1}}{2}\sum_{i=1}^{n_i}\sum_{j=1}^{n_h}\left(w_{ji}^{(1)}\right)^2 + \frac{\alpha_{b1}}{2}\sum_{j=1}^{n_h}\left(b_j^{(1)}\right)^2 + \frac{\alpha_{w2}}{2}\sum_{j=1}^{n_h}\sum_{k=1}^{n_o}\left(w_{kj}^{(2)}\right)^2 + \frac{\alpha_{b2}}{2}\sum_{k=1}^{n_o}\left(b_k^{(2)}\right)^2$$

Finally, $n_i + 3$ values $[\alpha_1, \ldots, \alpha_{n_i}, \alpha_{b1}, \alpha_{w2}, \alpha_{b2}]$ can be specified. These $n_i + 3$ parameters enable the individual regularization of each input variable $x_1, \ldots, x_{n_i}$ and the regularization of the remaining three groups of weights:

$$E_W = \sum_{i=1}^{n_i}\frac{\alpha_i}{2}\sum_{j=1}^{n_h}\left(w_{ji}^{(1)}\right)^2 + \frac{\alpha_{b1}}{2}\sum_{j=1}^{n_h}\left(b_j^{(1)}\right)^2 + \frac{\alpha_{w2}}{2}\sum_{j=1}^{n_h}\sum_{k=1}^{n_o}\left(w_{kj}^{(2)}\right)^2 + \frac{\alpha_{b2}}{2}\sum_{k=1}^{n_o}\left(b_k^{(2)}\right)^2$$

This kind of regularization is only useful in conjunction with the automatic determination of the regularization
parameters described below. If the automatic determination of the regularization parameters returns a very large
value of αj (compared to the smallest value of the ni values αi ), the corresponding input variable has little rele-
vance for the MLP output. If this is the case, it should be tested whether the input variable can be omitted from the
input of the MLP without negatively affecting the MLP’s performance. The advantage of omitting irrelevant input
variables is an increased speed of the MLP for classification.
The parameters α can be regarded as the inverse variance of a Gaussian prior distribution on the MLP weights, i.e.,
they express an expectation about the size of the MLP weights. The larger the α are chosen, the smaller the MLP
weights will be.
Regularization Parameters
The larger the regularization parameter(s) ’weight_prior’ are chosen, the smoother the transition of the confidences
between the different classes will be. The required values for the regularization parameter(s) depend on the MLP,
especially the number of hidden units, the training data, and the scale of the training data (if no normalization
is used). Typically, a higher value for the regularization parameter(s) is necessary if the MLP has more hidden
units and if the training data consists of more points. For typical applications, the regularization parameters are
determined by verifying the MLP performance on a test data set that is independent from the training data set. If
an independent test data set is unavailable, cross validation can be used. Cross validation works by splitting the
data set into separate parts (for example, 80% of the data set for training and 20% for testing), training the MLP
with the training data set (the 80% of the data in the above example), and testing the MLP performance on the
test set (the 20% of the data in the above example). The procedure can be repeated for the other possible splits
of the data (in the 80%–20% example, there are five possible splits). This procedure can, for example, start with
relatively large values of the weight regularization parameters (which will typically result in misclassifications on
the test data set). The weight regularization parameters can then be decreased until an acceptable performance on
the test data sets is reached.
Automatic Determination of the Regularization Parameters
The regularization parameters, i.e., the weight priors and the noise prior, can also be determined automati-
cally by train_class_mlp using the so-called evidence procedure (for details about the evidence procedure,
please refer to the articles in the section “References” below). This training mode can be selected by setting
GenParamName to ’num_outer_iterations’ and setting GenParamValue to a value > 0. Note that this typically
results in training times that are one to three orders of magnitude larger than simply training the MLP with fixed
regularization parameters.
The evidence procedure is an iterative algorithm that performs the following two steps for a number of outer itera-
tions: first, the network is trained using the current values of the regularization parameters; next, the regularization
parameters are re-estimated using the weights of the optimized MLP. In the first iteration, the weight priors and
noise priors specified with ’weight_prior’ and ’noise_prior’ are used. Thus, for the automatic determination of the
regularization parameters, the values specified by the user serve as the starting parameters for the evidence proce-
dure. The starting parameters for the weight priors should not be set too large because this might over-regularize
the training and may result in badly determined regularization parameters. The initial values for the weight priors
should typically be in the range 0.01-0.1.
The number of outer iterations can be set by setting GenParamName to ’num_outer_iterations’ and setting
GenParamValue to a value > 0. If GenParamValue is set to 0 (this is the default value), the evidence proce-
dure is not executed and the MLP is simply trained using the user-specified regularization parameters.
The number of outer iterations should be set high enough to ensure the convergence of the regularization parame-
ters. In contrast to the training of the MLP’s weights, a numerical convergence criterion is typically very difficult
to specify and some human judgment is typically required to decide whether the regularization parameters have
converged sufficiently. Therefore, it might not be possible to set the number of outer iterations a-priori to ensure
convergence of the regularization parameters. In these cases, the outer loop over the steps of the evidence pro-
cedure can be implemented manually by setting ’num_outer_iterations’ to 1 and calling train_class_mlp
repeatedly. This has the advantage that the weight priors and noise prior can be queried after each iteration and can
be checked manually for convergence. In this approach, the performance of the MLP can even be checked after
each iteration on an independent test set to check the generalization performance of the classifier.
If the number of outer iterations has been determined (approximately) for a class of applications, it may be possible
to reduce the run time of the training (if MLPs should be trained in the future with similar data sets) by setting
GenParamName to ’num_inner_iterations’ and setting GenParamValue to a value > 1 (the default value is 1)
and by reducing the number of outer iterations. The number of outer iterations can typically not be reduced by the
same factor by which the number of inner iterations is increased. Using this approach, the run time of the training

can be optimized. However, this approach is only useful if many MLPs are trained with similar data sets. If this is
not the case, ’num_inner_iterations’ should be left at its default value of 1.
The automatically determined weight priors and noise prior can be queried after the training us-
ing get_regularization_params_class_mlp by setting GenParamName to ’weight_prior’ or
’noise_prior’, respectively.
In addition to the weight prior and noise prior, the evidence procedure determines an estimate of the
number of parameters of the MLP that can be determined well using the training data. This re-
sult can be queried using get_regularization_params_class_mlp by setting GenParamName to
’num_well_determined_params’. Alternatively, the fraction of well-determined parameters can be queried by
setting GenParamName to ’fraction_well_determined_params’. If the number of well-determined parameters is
significantly smaller than nw (where nw is the number of weights in the MLP, as described in the section “Com-
plexity” below) or the fraction of well-determined parameters is significantly smaller than 1, consider reducing the
number of hidden units or, if the number of hidden units cannot be decreased without increasing the error rate of
the MLP significantly, consider performing a preprocessing that reduces the number of input variables to the net,
i.e., canonical variates or principal components.
Please note that the number of well-determined parameters can only be determined after the weight priors and
noise prior have been determined. This is the reason why the evidence procedure ends with the determination of
the regularization parameters and not with the training of the MLP weights. Hence, after the evidence procedure
the MLP will not have been trained with the latest regularization parameters. This should make no difference if
they have converged. If you want the training to end with an optimization of the weights using the latest values of
the regularization parameters, you can set ’num_outer_iterations’ to 0 and can call train_class_mlp again.
If you do so, please note, however, that the number of well-determined parameters may change and, therefore, the
value returned by get_regularization_params_class_mlp is technically inconsistent.
Saved Parameters
Note that the parameters ’num_outer_iterations’ and ’num_inner_iterations’ only affect the training of
the MLP. Therefore, they are not saved when the MLP is stored using write_class_mlp or
serialize_class_mlp. Thus, they must be set anew if the MLP is loaded again using read_class_mlp
or deserialize_class_mlp and if training using the automatic determination of the regularization
parameters should be continued. All other parameters described above (’weight_prior’, ’noise_prior’,
’num_well_determined_params’, and ’fraction_well_determined_params’) are saved.
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the regularization parameter to set.
Default: ’weight_prior’
List of values: GenParamName ∈ {’weight_prior’, ’noise_prior’, ’num_outer_iterations’,
’num_inner_iterations’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Value of the regularization parameter.
Default: 1.0
Suggested values: GenParamValue ∈ {0.01, 0.1, 1.0, 10.0, 100.0, 0, 1, 2, 3, 5, 10, 15, 20}
Example

* This example shows how to determine the regularization parameters


* automatically without examining the convergence of the
* regularization parameters.
* Create the MLP.
create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
'normalization', NumIn, 42, MLPHandle)
* Generate and add the training data
for J := 0 to NumData-1 by 1
* Generate training features and classes.
* Data = [...]
* Class = [...]
add_sample_class_mlp (MLPHandle, Data, Class)
endfor
* Set up the automatic determination of the regularization
* parameters.
set_regularization_params_class_mlp (MLPHandle, 'weight_prior', \
[0.01,0.01,0.01,0.01])
set_regularization_params_class_mlp (MLPHandle, \
'num_outer_iterations', 10)
* Train the MLP.
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
* Read out the estimate of the number of well-determined
* parameters.
get_regularization_params_class_mlp (MLPHandle, \
'fraction_well_determined_params', \
FractionParams)
* If FractionParams differs substantially from 1, consider reducing
* NumHidden appropriately and consider performing a preprocessing that
* reduces the number of input variables to the net, i.e., canonical
* variates or principal components.
write_class_mlp (MLPHandle, 'classifier.mlp')

* This example shows how to determine the regularization parameters


* automatically while examining the convergence of the
* regularization parameters.
* Create the MLP.
create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
'normalization', NumIn, 42, MLPHandle)
* Generate and add the training data.
for J := 0 to NumData-1 by 1
* Generate training features and classes
* Data = [...]
* Class = [...]
add_sample_class_mlp (MLPHandle, Data, Class)
endfor
* Set up the automatic determination of the regularization
* parameters.
set_regularization_params_class_mlp (MLPHandle, 'weight_prior', \
[0.01,0.01,0.01,0.01])
set_regularization_params_class_mlp (MLPHandle, \
'num_outer_iterations', 1)
for OuterIt := 1 to 10 by 1
* Train the MLP
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
* Read out the regularization parameters
get_regularization_params_class_mlp (MLPHandle, 'weight_prior', \
WeightPrior)
* Inspect the regularization parameters manually for
* convergence and exit the loop manually if they have
* converged.
* [...]
endfor
* Read out the estimate of the number of well-determined
* parameters.
get_regularization_params_class_mlp (MLPHandle,\
'fraction_well_determined_params',\
FractionParams)
* If FractionParams differs substantially from 1, consider reducing
* NumHidden appropriately and consider performing a preprocessing that
* reduces the number of input variables to the net, i.e., canonical
* variates or principal components.
write_class_mlp (MLPHandle, 'classifier.mlp')

Complexity
Let $n_i$ denote the number of input units of the MLP (i.e., $n_i$ = NumInput or $n_i$ = NumComponents, depending on the value of Preprocessing, as described at create_class_mlp), $n_h$ the number of hidden units, and $n_o$ the number of output units. Then, the number of weights of the MLP is $n_w = (n_i+1)n_h + (n_h+1)n_o$. Let $n_d$ denote the number of training samples. Let $n_M$ denote the number of iterations set with MaxIterations in train_class_mlp. Let $n_O$ and $n_I$ denote the number of outer and inner iterations, respectively.
The run time of the training without regularization or with regularization with fixed regularization parameters is of complexity $O(n_M n_w n_d)$. In contrast, the run time of the training with automatic determination of the regularization parameters is of complexity
$$O(n_O n_M n_w n_d) + O(n_O n_w^2 n_d) + O(n_O n_w^3) + O(n_O n_I n_w^3).$$
The training without regularization or with regularization with fixed regularization parameters requires at least $48 n_w + 24 n_h n_d + 16 n_o n_d$ bytes of memory. The training with automatic determination of the regularization parameters requires at least $24 n_w^2 + 48 n_w + 72 n_h n_d + 56 n_o n_d$ bytes of memory. Under special circumstances, another $24 n_w^2 + 8 n_w$ bytes of memory are required.
Result
If the parameters are valid, the operator set_regularization_params_class_mlp returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:

• MLPHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_class_mlp
Possible Successors
get_regularization_params_class_mlp, train_class_mlp
References
David J. C. MacKay: “Bayesian Interpolation”; Neural Computation 4(3):415-447; 1992.
David J. C. MacKay: “A Practical Bayesian Framework for Backpropagation Networks”; Neural Computation
4(3):448-472; 1992.
David J. C. MacKay: “The Evidence Framework Applied to Classification Networks”; Neural Computation 4(5):
720-736; 1992.
David J. C. MacKay: “Comparison of Approximate Methods for Handling Hyperparameters”; Neural Computation
11(5):1035-1068; 1999.
Module
Foundation

set_rejection_params_class_mlp ( : : MLPHandle, GenParamName,


GenParamValue : )

Set the parameters of a rejection class.

set_rejection_params_class_mlp sets the parameters of an automatically generated rejection class inside of a multilayer perceptron (MLP) given by MLPHandle. In some applications, it is desirable to know whether
a feature vector is similar to one of the training set. If a feature vector lies outside of the provided training set, it
should be classified as a special rejection class. This means that the feature vector is different to the confidence
area of the classifier. If a rejection class should be created automatically, an additional class must be specified
while creating the classifier in create_class_mlp. Here, the parameter NumOutput must be increased by
one.
The parameters of the rejection class are selected with GenParamName and the respective values with
GenParamValue.

’rejection_class_index’: By default, the last class serves as the rejection class. If another class should be used,
GenParamName must be set to ’rejection_class_index’ and GenParamValue to the class index.
’sampling_strategy’: Currently, three strategies exist to generate samples for the rejection class during the
training of the MLP. These strategies can be selected by setting GenParamName to ’sampling_strategy’
and GenParamValue to ’hyperbox_around_all_classes’, ’hyperbox_around_each_class’, or ’hyper-
box_ring_around_each_class’. The sampling strategy ’hyperbox_around_all_classes’ takes the bound-
ing box of all training samples that have been provided so far. The sampling strategy ’hyper-
box_around_each_class’ is similar with the only difference that the bounding box around each class
is taken as the area where the rejection samples are generated. The sampling strategy ’hyper-
box_ring_around_each_class’ generates samples only in the enlarged areas around the bounding box of each
class, thus generating a hyperbox ring around the original samples. Please note that with increasing dimen-
sionality the sampling strategies ’hyperbox_around_each_class’ and ’hyperbox_ring_around_each_class’
provide the same result. If no rejection class sampling strategy should be used, which is the default,
GenParamValue must be set to ’no_rejection_class’.
’hyperbox_tolerance’: The factor ’hyperbox_tolerance’ describes by what amount the bounding box should be
enlarged in all dimensions. Then, inside this box samples are randomly generated from a uniform distribution.
The default value is 0.2.
’rejection_sample_factor’: The number of rejection samples is the number of provided samples multiplied by
’rejection_sample_factor’. If not enough samples are generated, the rejection class may not be classified
correctly. If the rejection class has too many samples, the normal classes are classified as rejection class. The
default value is 1.0. Note that the training time will increase by a factor of 1 + f , where f is the value of
’rejection_sample_factor’.
’random_seed’: To ensure reproducible results, a random seed can be set with ’random_seed’. The default value
is 42.

Because this operator only parametrizes the training of the MLP, the values are not saved by write_class_mlp.
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Names of the generic parameters.
Default: ’sampling_strategy’
List of values: GenParamName ∈ {’sampling_strategy’, ’hyperbox_tolerance’, ’rejection_sample_factor’,
’random_seed’, ’rejection_class_index’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string / real / integer
Values of the generic parameters.
Default: ’hyperbox_around_all_classes’
List of values: GenParamValue ∈ {’no_rejection_class’, ’hyperbox_around_all_classes’,
’hyperbox_around_each_class’, ’hyperbox_ring_around_each_class’}
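Example

The following sketch shows one possible use of an automatic rejection class; the feature dimensions, the number
of classes, and the sample data are placeholders.

* Create the MLP with one additional output for the rejection class
create_class_mlp (NumIn, NumHidden, NumOut + 1, 'softmax', \
                  'normalization', 1, 42, MLPHandle)
* Generate rejection samples in the enlarged bounding box of all training samples
set_rejection_params_class_mlp (MLPHandle, 'sampling_strategy', \
                                'hyperbox_around_all_classes')
* Add the regular training samples
for J := 0 to NumData-1 by 1
* Data = [...]
* Class = ...
add_sample_class_mlp (MLPHandle, Data, Class)
endfor
* The rejection samples are generated automatically at the start of the training
train_class_mlp (MLPHandle, 200, 1, 0.01, Error, ErrorLog)
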
Result
If the parameters are valid, the operator set_rejection_params_class_mlp returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).


• Processed without parallelization.
This operator modifies the state of the following input parameter:
• MLPHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_class_mlp
Possible Successors
train_class_mlp
Module
Foundation

train_class_mlp ( : : MLPHandle, MaxIterations, WeightTolerance,
ErrorTolerance : Error, ErrorLog )

Train a multilayer perceptron.


train_class_mlp trains the multilayer perceptron (MLP) given in MLPHandle. Before the MLP
can be trained, all training samples to be used for the training must be stored in the MLP using
add_sample_class_mlp or read_samples_class_mlp. If after the training new additional training
samples should be used, a new MLP must be created with create_class_mlp, in which again all training sam-
ples to be used (i.e., the original ones and the additional ones) must be stored. In these cases, it is useful to save and
read the training data with write_samples_class_mlp and read_samples_class_mlp, respectively.
A second training with additional training samples is not explicitly forbidden by train_class_mlp. However,
this typically does not lead to good results because the training of an MLP is a complex nonlinear optimization
problem, and consequently the second training with new data will very likely lead to the fact that the optimization
gets stuck in a local minimum.
If a rejection class has been specified using set_rejection_params_class_mlp, before the actual training
the samples for the rejection class are generated.
During the training, the error the MLP achieves on the stored training samples is mini-
mized by using a nonlinear optimization algorithm. If the MLP has been regularized with
set_regularization_params_class_mlp, an additional weight penalty term is taken into
account. With this, the MLP weights described in create_class_mlp are determined. Fur-
thermore, if an automatic determination of the regularization parameters has been specified with
set_regularization_params_class_mlp, these parameters are optimized as well. As described
at set_regularization_params_class_mlp, training the MLP with automatic determination of the
regularization parameters requires significantly more time than training an unregularized MLP or an MLP with
fixed regularization parameters.
create_class_mlp initializes the MLP weights with random values to make it very likely that the optimization
converges to the global minimum of the error function. Nevertheless, in rare cases it may happen that the random
values determined with RandSeed in create_class_mlp result in a relatively large optimum error, i.e., that
the optimization gets stuck in a local minimum. If it can be conjectured that this has happened, the MLP should be
created anew with a different value for RandSeed in order to check whether a significantly smaller error can be
achieved.
The parameters MaxIterations, WeightTolerance, and ErrorTolerance control the nonlinear opti-
mization algorithm. Note that if an automatic determination of the regularization parameters has been specified
with set_regularization_params_class_mlp, these parameters refer to one training within one step
of the evidence procedure. MaxIterations specifies the maximum number of iterations of the optimization
algorithm. In practice, values between 100 and 200 should be sufficient for most problems. WeightTolerance
specifies a threshold for the change of the weights per iteration. Here, the absolute value of the change of the
weights between two iterations is summed. Hence, this value depends on the number of weights as well as the size
of the weights, which in turn depend on the scaling of the training data. Typically, values between 0.00001 and
1 should be used. ErrorTolerance specifies a threshold for the change of the error value per iteration. This
value depends on the number of training samples as well as the number of output variables of the MLP. Also here,
values between 0.00001 and 1 should typically be used. The optimization is terminated if the weight change is
smaller than WeightTolerance and the change of the error value is smaller than ErrorTolerance. In any
case, the optimization is terminated after at most MaxIterations iterations. It should be noted that, depending
on the size of the MLP and the number of training samples, the training can take from a few seconds to several
hours.
On output, train_class_mlp returns the error of the MLP with the optimal weights on the training samples
in Error. Furthermore, ErrorLog contains the error value as a function of the number of iterations. With
this, it is possible to decide whether a second training of the MLP with the same training data without creating
the MLP anew makes sense. If ErrorLog is regarded as a function, it should drop off steeply initially, while
leveling out very flatly at the end. If ErrorLog is still relatively steep at the end, it usually makes sense to call
train_class_mlp again. It should be noted, however, that this mechanism should not be used to train the
MLP successively with MaxIterations = 1 (or other small values for MaxIterations) because this will
substantially increase the number of iterations required to train the MLP. Note that if an automatic determination of
the regularization parameters has been specified with set_regularization_params_class_mlp, Error
and ErrorLog refer to the last training that was executed in the evidence procedure. If the error log should be
monitored within the individual iterations of the evidence procedure, the outer iteration of the evidence procedure
must be implemented explicitly, as described at set_regularization_params_class_mlp.
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. MaxIterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Maximum number of iterations of the optimization algorithm.
Default: 200
Suggested values: MaxIterations ∈ {20, 40, 60, 80, 100, 120, 140, 160, 180, 200, 220, 240, 260, 280,
300}
. WeightTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Threshold for the difference of the weights of the MLP between two iterations of the optimization algorithm.
Default: 1.0
Suggested values: WeightTolerance ∈ {1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001}
Restriction: WeightTolerance >= 1.0e-8
. ErrorTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Threshold for the difference of the mean error of the MLP on the training data between two iterations of the
optimization algorithm.
Default: 0.01
Suggested values: ErrorTolerance ∈ {1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001}
Restriction: ErrorTolerance >= 1.0e-8
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Mean error of the MLP on the training data.
. ErrorLog (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Mean error of the MLP on the training data as a function of the number of iterations of the optimization
algorithm.
Example

* Train an MLP
create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
'normalization', 1, 42, MLPHandle)
read_samples_class_mlp (MLPHandle, 'samples.mtf')
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
write_class_mlp (MLPHandle, 'classifier.mlp')

Result
If the parameters are valid, the operator train_class_mlp returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
train_class_mlp may return the error 9211 (Matrix is not positive definite) if Preprocessing = ’canon-
ical_variates’ is used. This typically indicates that not enough training samples have been stored for each class.


Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
This operator modifies the state of the following input parameter:
• MLPHandle

During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
add_sample_class_mlp, read_samples_class_mlp,
set_regularization_params_class_mlp
Possible Successors
evaluate_class_mlp, classify_class_mlp, write_class_mlp, create_class_lut_mlp
Alternatives
train_dl_classifier_batch, read_class_mlp
See also
create_class_mlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation

write_class_mlp ( : : MLPHandle, FileName : )

Write a multilayer perceptron to a file.


write_class_mlp writes the multilayer perceptron (MLP) MLPHandle to the file given by FileName. The
default HALCON file extension for the MLP classifier is ’gmc’. write_class_mlp is typically called af-
ter the MLP has been trained with train_class_mlp. The MLP can be read with read_class_mlp.
write_class_mlp does not write any training samples that possibly have been stored in the MLP. For this
purpose, write_samples_class_mlp should be used.
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name.
File extension: .gmc
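Example

A brief sketch of the typical offline sequence; the file name is a placeholder.

* Train the MLP and write it to a file (the training samples are not written)
train_class_mlp (MLPHandle, 200, 1, 0.01, Error, ErrorLog)
write_class_mlp (MLPHandle, 'classifier.gmc')
* Later, e.g., in the online process, read the classifier again
read_class_mlp ('classifier.gmc', MLPHandle)
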
Result
If the parameters are valid, the operator write_class_mlp returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.


Possible Predecessors
train_class_mlp
Possible Successors
clear_class_mlp
See also
create_class_mlp, read_class_mlp, write_samples_class_mlp
Module
Foundation

write_samples_class_mlp ( : : MLPHandle, FileName : )

Write the training data of a multilayer perceptron to a file.


write_samples_class_mlp writes the training samples stored in the multilayer perceptron (MLP)
MLPHandle to the file given by FileName. write_samples_class_mlp can be used to build up a
database of training samples, and hence to improve the performance of the MLP by training it with an ex-
tended data set (see train_class_mlp). For other possible uses of write_samples_class_mlp see
get_prep_info_class_mlp.
The file FileName is overwritten by write_samples_class_mlp. Nevertheless, extending the database of
training samples is easy to do because read_samples_class_mlp and add_sample_class_mlp add the
training samples to the training samples that are already stored in memory with the MLP.
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name.
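Example

A sketch of building up a database of training samples over several sessions; the file name and the sample data
are placeholders.

* First session: add samples and save them to a file
add_sample_class_mlp (MLPHandle, Data, Class)
write_samples_class_mlp (MLPHandle, 'samples.mtf')
* Later session: read the stored samples, add new ones, and save the extended set
read_samples_class_mlp (MLPHandle, 'samples.mtf')
add_sample_class_mlp (MLPHandle, NewData, NewClass)
write_samples_class_mlp (MLPHandle, 'samples.mtf')
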
Result
If the parameters are valid, the operator write_samples_class_mlp returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
add_sample_class_mlp
Possible Successors
clear_samples_class_mlp
See also
create_class_mlp, get_prep_info_class_mlp, read_samples_class_mlp
Module
Foundation

7.6 Support Vector Machines

add_class_train_data_svm ( : : SVMHandle,
ClassTrainDataHandle : )

Add training data to a support vector machine (SVM).


add_class_train_data_svm adds the training data specified by ClassTrainDataHandle to a support
vector machine (SVM) specified by SVMHandle.
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
Handle of an SVM that receives the training data.
. ClassTrainDataHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Training data for a classifier.
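Example

A sketch of transferring training data that is already stored in another classifier to a newly created SVM; it is
assumed that MLPHandle refers to an MLP containing training samples and that the analogous operator
get_class_train_data_mlp is used to extract them.

* Extract the training data from the MLP
get_class_train_data_mlp (MLPHandle, ClassTrainDataHandle)
* Create an SVM with matching feature dimension and class count
create_class_svm (NumFeatures, 'rbf', 0.02, 0.05, NumClasses, \
                  'one-versus-one', 'normalization', NumFeatures, SVMHandle)
* Add the training data to the SVM and train it
add_class_train_data_svm (SVMHandle, ClassTrainDataHandle)
train_class_svm (SVMHandle, 0.001, 'default')
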
Result
If the parameters are valid, the operator add_class_train_data_svm returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:

• SVMHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_class_svm, create_class_train_data
Possible Successors
get_sample_class_svm
Alternatives
add_sample_class_svm
See also
create_class_svm
Module
Foundation

add_sample_class_svm ( : : SVMHandle, Features, Class : )

Add a training sample to the training data of a support vector machine.


add_sample_class_svm adds a training sample to the support vector machine (SVM) given by SVMHandle.
The training sample is given by Features and Class. Features is the feature vector of the sample, and conse-
quently must be a real vector of length NumFeatures, as specified in create_class_svm. Class is the tar-
get of the sample, which must be in the range of 0 to NumClasses-1 (see create_class_svm). In the special
case of ’novelty-detection’ the class is to be set to 0 as only one class is assumed. Before the SVM can be trained
with train_class_svm, training samples must be added to the SVM with add_sample_class_svm. The
usage of support vectors of an already trained SVM as training samples is described in train_class_svm.
The number of currently stored training samples can be queried with get_sample_num_class_svm. Stored
training samples can be read out again with get_sample_class_svm.
Normally, it is useful to save the training samples in a file with write_samples_class_svm. This facilitates
reusing the samples and, if necessary, adding new training samples to the data set, so that a newly created SVM
can be trained with the extended data set.


Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector of the training sample to be stored.
. Class (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer / real
Class of the training sample to be stored.
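Example

A sketch of adding samples for the special case Mode = ’novelty-detection’, where all training samples belong
to class 0; the feature data is a placeholder.

* Create a novelty-detection SVM with a single class
create_class_svm (NumFeatures, 'rbf', 0.01, 0.05, 1, 'novelty-detection', \
                  'normalization', NumFeatures, SVMHandle)
for J := 0 to NumData-1 by 1
* Data = [...]
add_sample_class_svm (SVMHandle, Data, 0)
endfor
write_samples_class_svm (SVMHandle, 'samples.mtf')
train_class_svm (SVMHandle, 0.001, 'default')
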
Result
If the parameters are valid the operator add_sample_class_svm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• SVMHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_class_svm
Possible Successors
train_class_svm, write_samples_class_svm, get_sample_num_class_svm,
get_sample_class_svm
Alternatives
read_samples_class_svm
See also
clear_samples_class_svm, get_support_vector_class_svm
Module
Foundation

classify_class_svm ( : : SVMHandle, Features, Num : Class )

Classify a feature vector by a support vector machine.


classify_class_svm computes the best Num classes of the feature vector Features with the SVM
SVMHandle and returns them in Class. If the classifier was created in the Mode = ’one-versus-one’, the
classes are ordered by the number of votes of the sub-classifiers. If Mode = ’one-versus-all’ was used, the classes
are ordered by the value of each sub-classifier (see create_class_svm for more details). If the classifier was
created in the Mode = ’novelty-detection’, it determines whether the feature vector belongs to the same class as
the training data (Class = 1) or is regarded as outlier (Class = 0). In this case Num must be set to 1 as the
classifier only determines membership.
Before calling classify_class_svm, the SVM must be trained with train_class_svm.
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector.

. Num (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Number of best classes to determine.
Default: 1
Suggested values: Num ∈ {1, 2, 3, 4, 5}
. Class (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Result of classifying the feature vector with the SVM.
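Example

A sketch of classifying unknown feature vectors and inspecting the two best classes; the classifier file and the
feature extraction are placeholders.

read_class_svm ('classifier.svm', SVMHandle)
for J := 0 to N-1 by 1
* Features = [...]
* Determine the two best classes (e.g., ordered by votes for 'one-versus-one')
classify_class_svm (SVMHandle, Features, 2, Classes)
BestClass := Classes[0]
SecondBestClass := Classes[1]
endfor
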
Result
If the parameters are valid the operator classify_class_svm returns the value 2 (H_MSG_TRUE). If neces-
sary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
train_class_svm, read_class_svm
Alternatives
apply_dl_classifier
See also
create_class_svm
References
John Shawe-Taylor, Nello Cristianini: “Kernel Methods for Pattern Analysis”; Cambridge University Press, Cam-
bridge; 2004.
Bernhard Schölkopf, Alexander J. Smola: “Learning with Kernels”; MIT Press, London; 1999.
Module
Foundation

clear_class_svm ( : : SVMHandle : )

Clear a support vector machine.


clear_class_svm clears the support vector machine (SVM) given by SVMHandle and frees all mem-
ory required for the SVM. After calling clear_class_svm, the SVM can no longer be used. The handle
SVMHandle becomes invalid.
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm(-array) ; handle
SVM handle.
Result
If SVMHandle is valid the operator clear_class_svm returns the value 2 (H_MSG_TRUE). If necessary, an
exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• SVMHandle


During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
classify_class_svm
See also
create_class_svm, read_class_svm, write_class_svm, train_class_svm
Module
Foundation

clear_samples_class_svm ( : : SVMHandle : )

Clear the training data of a support vector machine.


clear_samples_class_svm clears all training samples that have been added to the support vec-
tor machine (SVM) SVMHandle with add_sample_class_svm or read_samples_class_svm.
clear_samples_class_svm should only be used if the SVM is trained in the same process that uses the
SVM for classification with classify_class_svm. In this case, the memory required for the training samples
can be freed with clear_samples_class_svm, and hence memory can be saved. In the normal usage, in
which the SVM is trained offline and written to a file with write_class_svm, it is typically unnecessary to call
clear_samples_class_svm because write_class_svm does not save the training samples, and hence
the online process, which reads the SVM with read_class_svm, requires no memory for the training samples.
Parameters

. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm(-array) ; handle
SVM handle.
Result
If the parameters are valid the operator clear_samples_class_svm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• SVMHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
train_class_svm, write_samples_class_svm
See also
create_class_svm, clear_class_svm, add_sample_class_svm, read_samples_class_svm
Module
Foundation

create_class_svm ( : : NumFeatures, KernelType, KernelParam, Nu,
NumClasses, Mode, Preprocessing, NumComponents : SVMHandle )

Create a support vector machine for pattern classification.


create_class_svm creates a support vector machine that can be used for pattern classification. The dimension
of the patterns to be classified is specified in NumFeatures, the number of different classes in NumClasses.


For a binary classification problem in which the classes are linearly separable the SVM algorithm selects data
vectors from the training set that are utilized to construct the optimal separating hyperplane between different
classes. This hyperplane is optimal in the sense that the margin between the convex hulls of the different classes
is maximized. The training patterns that are located at the margin define the hyperplane and are called support
vectors (SV).
Classification of a feature vector z is performed with the following formula:

f(z) = \operatorname{sign}\left( \sum_{i=1}^{n_{sv}} \alpha_i y_i \langle x_i, z \rangle + b \right)

Here, the x_i are the support vectors, y_i encodes their class membership (±1), and the α_i are the weight coefficients.
The distance of the hyperplane to the origin is b. The α_i and b are determined during training with train_class_svm.
Note that only a subset of the original training set (n_sv: the number of support vectors) is necessary for the definition
of the decision boundary, and therefore data vectors that are not support vectors are discarded. The classification
speed depends on the evaluation of the dot product between the support vectors and the feature vector to be classified,
and hence depends on the length of the feature vector and the number n_sv of support vectors.
For classification problems in which the classes are not linearly separable the algorithm is extended in two ways.
First, during training a certain amount of errors (overlaps) is compensated with the use of slack variables. This
means that the α are upper bounded by a regularization constant. To enable an intuitive control of the amount of
training errors, the Nu-SVM version of the training algorithm is used. Here, the regularization parameter Nu is an
asymptotic upper bound on the number of training errors and an asymptotic lower bound on the number of support
vectors. As a rule of thumb, the parameter Nu should be set to the prior expectation of the application’s specific
error ratio, e.g., 0.01 (corresponding to a maximum training error of 1%). Please note that too large a value for Nu
might lead to an infeasible training problem, i.e., the SVM cannot be trained correctly (see train_class_svm
for more details). Since this can only be determined during training, an exception can only be raised there. In this
case, a new SVM with Nu chosen smaller must be created.
Second, because the above SVM exclusively calculates dot products between the feature vectors, it is possible to
incorporate a kernel function into the training and testing algorithm. This means that the dot products are substi-
tuted by a kernel function, which implicitly performs the dot product in a higher dimensional feature space. Given
the appropriate kernel transformation, an originally not linearly separable classification task becomes linearly sep-
arable in the higher dimensional feature space.
Different kernel functions can be selected with the parameter KernelType. For KernelType = ’linear’ the
dot product, as specified in the above formula is calculated. This kernel should solely be used for linearly or nearly
linearly separable classification tasks. The parameter KernelParam is ignored here.
The radial basis function (RBF) KernelType = ’rbf’ is the best choice for a kernel function because it achieves
good results for many classification tasks. It is defined as:

K(x, z) = e^{-\gamma \cdot \|x - z\|^2}

Here, the parameter KernelParam is used to select γ. The intuitive meaning of γ is the amount of influence of
a support vector upon its surroundings. A big value of γ (small influence on the surroundings) means that each
training vector becomes a support vector. The training algorithm learns the training data “by heart”, but lacks any
generalization ability (over-fitting). Additionally, the training/classification times grow significantly. A too small
value for γ (big influence on the surroundings) leads to few support vectors defining the separating hyperplane
(under-fitting). One typical strategy is to select a small γ-Nu pair and consecutively increase the values as long as
the recognition rate increases.
With KernelType = ’polynomial_homogeneous’ or ’polynomial_inhomogeneous’, polynomial kernels can be
selected. They are defined in the following way:

K(x, z) = \langle x, z \rangle^{d}

K(x, z) = (\langle x, z \rangle + 1)^{d}

The degree d of the polynomial kernel must be set with KernelParam. Please note that a polynomial of too high
a degree (d > 10) might result in numerical problems.


As a rule of thumb, the RBF kernel provides a good choice for most of the classification problems and should
therefore be used in almost all cases. Nevertheless, the linear and polynomial kernels might be better suited for
certain applications and can be tested for comparison. Please note that the novelty-detection Mode and the operator
reduce_class_svm are provided only for the RBF kernel.
Mode specifies the general classification task, which is either how to break down a multi-class decision problem to
binary sub-cases or whether to use a special classifier mode called ’novelty-detection’. Mode = ’one-versus-all’
creates a classifier where each class is compared to the rest of the training data. During testing the class with the
largest output (see the classification formula without sign) is chosen. Mode = ’one-versus-one’ creates a binary
classifier between each single class. During testing a vote is cast and the class with the majority of the votes
is selected. The optimal Mode for multi-class classification depends on the number of classes. Given n classes
’one-versus-all’ creates n classifiers, whereas ’one-versus-one’ creates n(n − 1)/2. Note that for a binary decision
task ’one-versus-one’ would create exactly one, whereas ’one-versus-all’ unnecessarily creates two symmetric
classifiers. For few classes (approximately up to 10) ’one-versus-one’ is faster for training and testing, because the
sub-classifiers each consist of fewer training data and result in overall fewer support vectors. In case of many classes
’one-versus-all’ is preferable, because ’one-versus-one’ generates a prohibitively large amount of sub-classifiers,
as their number increases to the square of the number of classes.
A special case of classification is Mode = ’novelty-detection’, where the test data is classified only with regard to
membership to the training data, i.e., NumClasses must be set to 1. The separating hyperplane lies around the
training data and thereby implicitly divides the training data from the rejection class. The advantage is that the
rejection class is not defined explicitly, which is difficult to do in certain applications like texture classification.
The resulting support vectors are all lying at the border. With the parameter Nu, the ratio of outliers in the training
data set is specified. Note that when classifying in the ’novelty-detection’ mode, the class of the training data is
returned with index 1 and the rejection class is returned with index 0. Thus, the first class serves as rejection class.
In contrast, when using the MLP classifier, the last class serves as rejection class by default.
The parameters Preprocessing and NumComponents can be used to specify a preprocessing of the feature
vectors. For Preprocessing = ’none’, the feature vectors are passed unaltered to the SVM. NumComponents
is ignored in this case.
For all other values of Preprocessing, the training data set is used to compute a transformation of the feature
vectors during the training as well as later in the classification.
For Preprocessing = ’normalization’, the feature vectors are normalized. In case of a polynomial kernel, the
minimum and maximum value of the training data set is transformed to -1 and +1. In case of the RBF kernel, the
data is normalized by subtracting the mean of the training vectors and dividing the result by the standard deviation
of the individual components of the training vectors. Hence, the transformed feature vectors have a mean of 0 and
a standard deviation of 1. The normalization does not change the length of the feature vector. NumComponents
is ignored in this case. This transformation can be used if the mean and standard deviation of the feature vectors
differs substantially from 0 and 1, respectively, or for data in which the components of the feature vectors are
measured in different units (e.g., if some of the data are gray value features and some are region features, or
if region features are mixed, e.g., ’circularity’ (unit: scalar) and ’area’ (unit: pixel squared)). The
normalization transformation should be performed in general, because it increases the numerical stability during
training/testing.
For Preprocessing = ’principal_components’, a principal component analysis (PCA) is performed. First, the
feature vectors are normalized (see above). Then, an orthogonal transformation (a rotation in the feature space)
that decorrelates the training vectors is computed. After the transformation, the mean of the training vectors is
0 and the covariance matrix of the training vectors is a diagonal matrix. The transformation is chosen such that
most of the variation of the training data is contained in the first components of the transformed
feature vector. With this, it is possible to omit the transformed features in the last components of the feature vector,
which typically are mainly influenced by noise, without losing a large amount of information. The parameter
NumComponents can be used to determine how many of the transformed feature vector components should be
used. Up to NumFeatures components can be selected. The operator get_prep_info_class_svm can be
used to determine how much information each transformed component contains. Hence, it aids the selection of
NumComponents. Like data normalization, this transformation can be used if the mean and standard deviation of
the feature vectors differs substantially from 0 and 1, respectively, or for feature vectors in which the components
of the data are measured in different units. In addition, this transformation is useful if it can be expected that the
features are highly correlated. Please note that the RBF kernel is very robust against the dimensionality reduction
performed by PCA and should therefore be the first choice when speeding up the classification time.
The transformation specified by Preprocessing = ’canonical_variates’ first normalizes the training vectors
and then decorrelates the training vectors on average over all classes. At the same time, the transformation
maximally separates the mean values of the individual classes. As for Preprocessing = ’principal_components’,
the transformed components are sorted by information content, and hence transformed components with little infor-
mation content can be omitted. For canonical variates, up to min(NumClasses−1, NumFeatures) components
can be selected. Also in this case, the information content of the transformed components can be determined with
get_prep_info_class_svm. Like principal component analysis, canonical variates can be used to reduce
the amount of data without losing a large amount of information, while additionally optimizing the separability of
the classes after the data reduction. The computation of the canonical variates is also called linear discriminant
analysis.
For the last two types of transformations (’principal_components’ and ’canonical_variates’), the length of input
data of the SVM is determined by NumComponents, whereas NumFeatures determines the dimensionality of
the input data (i.e., the length of the untransformed feature vector). Hence, by using one of these two transforma-
tions, the size of the SVM with respect to data length is reduced, leading to shorter training/classification times by
the SVM.
After the SVM has been created with create_class_svm, typically training samples are added to the SVM
by repeatedly calling add_sample_class_svm or read_samples_class_svm. After this, the SVM is
typically trained using train_class_svm. Hereafter, the SVM can be saved using write_class_svm.
Alternatively, the SVM can be used immediately after training to classify data using classify_class_svm.
A comparison of the SVM and the multi-layer perceptron (MLP) (see create_class_mlp) typically shows
that SVMs are generally faster at training, especially for huge training sets, and achieve slightly better recognition
rates than MLPs. The MLP is faster at classification and should therefore be preferred in time critical applications.
Please note that this guideline assumes optimal tuning of the parameters.
Parameters
. NumFeatures (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of input variables (features) of the SVM.
Default: 10
Suggested values: NumFeatures ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction: NumFeatures >= 1
. KernelType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
The kernel type.
Default: ’rbf’
List of values: KernelType ∈ {’linear’, ’rbf’, ’polynomial_inhomogeneous’, ’polynomial_homogeneous’}
. KernelParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Additional parameter for the kernel function. In the case of the RBF kernel, the value for γ; for the polynomial
kernels, the degree d.
Default: 0.02
Suggested values: KernelParam ∈ {0.01, 0.02, 0.05, 0.1, 0.5}
. Nu (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Regularization constant of the SVM.
Default: 0.05
Suggested values: Nu ∈ {0.0001, 0.001, 0.01, 0.05, 0.1, 0.2, 0.3}
Restriction: Nu > 0.0 && Nu < 1.0
. NumClasses (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of classes.
Default: 5
Suggested values: NumClasses ∈ {2, 3, 4, 5, 6, 7, 8, 9, 10}
Restriction: NumClasses >= 1
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
The mode of the SVM.
Default: ’one-versus-one’
List of values: Mode ∈ {’novelty-detection’, ’one-versus-all’, ’one-versus-one’}
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of preprocessing used to transform the feature vectors.
Default: ’normalization’
List of values: Preprocessing ∈ {’none’, ’normalization’, ’principal_components’, ’canonical_variates’}
. NumComponents (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = ’none’ and
Preprocessing = ’normalization’).
Default: 10
Suggested values: NumComponents ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction: NumComponents >= 1
. SVMHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
Example

create_class_svm (NumFeatures, 'rbf', 0.01, 0.01, NumClasses,\
                  'one-versus-all', 'normalization', NumFeatures,\
                  SVMHandle)
* Generate and add the training data
for J := 0 to NumData-1 by 1
* Generate training features and classes
* Data = [...]
* Class = ...
add_sample_class_svm (SVMHandle, Data, Class)
endfor
* Train the SVM
train_class_svm (SVMHandle, 0.001, 'default')
* Use the SVM to classify unknown data
for J := 0 to N-1 by 1
* Extract features
* Features = [...]
classify_class_svm (SVMHandle, Features, 1, Class)
endfor

Result
If the parameters are valid the operator create_class_svm returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
add_sample_class_svm
Alternatives
read_dl_classifier, create_class_mlp, create_class_gmm
See also
clear_class_svm, train_class_svm, classify_class_svm
References
Bernhard Schölkopf, Alexander J. Smola: “Learning with Kernels”; MIT Press, London; 1999.
John Shawe-Taylor, Nello Cristianini: “Kernel Methods for Pattern Analysis”; Cambridge University Press, Cam-
bridge; 2004.
Module
Foundation

deserialize_class_svm ( : : SerializedItemHandle : SVMHandle )

Deserialize a serialized support vector machine (SVM).


deserialize_class_svm deserializes a support vector machine (SVM) (including its training samples),
that was serialized by serialize_class_svm (see fwrite_serialized_item for an introduction
of the basic principle of serialization). The serialized support vector machine is defined by the handle
SerializedItemHandle. The deserialized values are stored in an automatically created support vector ma-
chine with the handle SVMHandle.
Parameters
. SerializedItemHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
. SVMHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
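Example

A sketch of serializing a trained SVM to a binary file and deserializing it again; the file name is a placeholder.

* Serialize the trained SVM and write it to a file
serialize_class_svm (SVMHandle, SerializedItemHandle)
open_file ('classifier.ser', 'output_binary', FileHandle)
fwrite_serialized_item (FileHandle, SerializedItemHandle)
close_file (FileHandle)
* Read the serialized item again and create an SVM from it
open_file ('classifier.ser', 'input_binary', FileHandle)
fread_serialized_item (FileHandle, SerializedItemHandle)
close_file (FileHandle)
deserialize_class_svm (SerializedItemHandle, SVMHandle2)
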
Result
If the parameters are valid, the operator deserialize_class_svm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
fread_serialized_item, receive_serialized_item, serialize_class_svm
Possible Successors
classify_class_svm, create_class_lut_svm
See also
create_class_svm, write_class_svm, serialize_class_svm
Module
Foundation

evaluate_class_svm ( : : SVMHandle, Features : Result )

Evaluate a feature vector by a support vector machine.


evaluate_class_svm calculates the evaluation Result of the feature vector provided in Features with the SVM
given in SVMHandle. The operator evaluate_class_svm can only be used if the SVM was created with Mode =
’novelty-detection’. If the feature vector lies within the class, a Result value greater than 1.0 is returned. If the
feature vector lies outside the class boundary, i.e., it is an outlier, a value smaller than 1.0 is returned.
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector.
. Result (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Result of evaluating the feature vector with the SVM.
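Example

A sketch of using the evaluation result for novelty detection; the feature data is a placeholder and the threshold
follows the description above.

* SVM created with Mode = 'novelty-detection' and trained beforehand
* Features = [...]
evaluate_class_svm (SVMHandle, Features, Result)
if (Result > 1.0)
* Feature vector belongs to the class of the training data
else
* Feature vector is regarded as an outlier
endif
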
Result
If the parameters are valid the operator evaluate_class_svm returns the value 2 (H_MSG_TRUE). If neces-
sary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.


Possible Predecessors
train_class_svm, read_class_svm
See also
create_class_svm
Module
Foundation

get_class_train_data_svm ( : : SVMHandle : ClassTrainDataHandle )

Get the training data of a support vector machine (SVM).


get_class_train_data_svm gets the training data of a support vector machine (SVM) and returns it in
ClassTrainDataHandle.
Parameters

. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
Handle of an SVM that contains training data.
. ClassTrainDataHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of the training data of the classifier.
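Example

A sketch of reusing the training data stored in an SVM to train an MLP on the same samples; the network
dimensions are placeholders, and the parameter order of add_class_train_data_mlp is assumed to be analogous
to add_class_train_data_svm.

* Extract the training data stored in the SVM
get_class_train_data_svm (SVMHandle, ClassTrainDataHandle)
* Create an MLP with matching dimensions and add the same training data
create_class_mlp (NumFeatures, NumHidden, NumClasses, 'softmax', \
                  'normalization', NumFeatures, 42, MLPHandle)
add_class_train_data_mlp (MLPHandle, ClassTrainDataHandle)
train_class_mlp (MLPHandle, 200, 1, 0.01, Error, ErrorLog)
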
Result
If the parameters are valid, the operator get_class_train_data_svm returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
add_sample_class_svm, read_samples_class_svm
Possible Successors
add_class_train_data_mlp, add_class_train_data_gmm, add_class_train_data_knn
See also
create_class_train_data
Module
Foundation

get_params_class_svm ( : : SVMHandle : NumFeatures, KernelType,
KernelParam, Nu, NumClasses, Mode, Preprocessing, NumComponents )

Return the parameters of a support vector machine.


get_params_class_svm returns the parameters of a support vector machine (SVM) that were specified when
the SVM was created with create_class_svm. This is particularly useful if the SVM was read from a file with
read_class_svm. The output of get_params_class_svm can, for example, be used to check whether the
feature vectors and, if necessary, the target data to be used with the SVM have the correct lengths. For a description
of the parameters, see create_class_svm.


Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. NumFeatures (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of input variables (features) of the SVM.
. KernelType (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
The kernel type.
. KernelParam (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Additional parameter for the kernel.
. Nu (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Regularization constant of the SVM.
. NumClasses (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of classes of the test data.
. Mode (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
The mode of the SVM.
. Preprocessing (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of preprocessing used to transform the feature vectors.
. NumComponents (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = ’none’ and
Preprocessing = ’normalization’).
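Example

A sketch of checking the expected feature vector length of a classifier that was read from a file; the file name is
a placeholder.

read_class_svm ('classifier.svm', SVMHandle)
get_params_class_svm (SVMHandle, NumFeatures, KernelType, KernelParam, Nu, \
                      NumClasses, Mode, Preprocessing, NumComponents)
* The extracted feature vectors must have the length NumFeatures
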
Result
If the parameters are valid the operator get_params_class_svm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
create_class_svm, read_class_svm
Possible Successors
add_sample_class_svm, train_class_svm
See also
classify_class_svm
Module
Foundation

get_prep_info_class_svm ( : : SVMHandle,
Preprocessing : InformationCont, CumInformationCont )

Compute the information content of the preprocessed feature vectors of a support vector machine.

get_prep_info_class_svm computes the information content of the training vectors that have been
transformed with the preprocessing given by Preprocessing. Preprocessing can be set to ’princi-
pal_components’ or ’canonical_variates’. The preprocessing methods are described with create_class_svm.
The information content is derived from the variations of the transformed components of the feature vec-
tor, i.e., it is computed solely based on the training data, independent of any error rate on the training
data. The information content is computed for all relevant components of the transformed feature vec-
tors (NumFeatures for ’principal_components’ and min(NumClasses − 1, NumFeatures) for ’canoni-
cal_variates’, see create_class_svm), and is returned in InformationCont as a number between 0 and
1. To convert the information content into a percentage, it simply needs to be multiplied by 100. The cumulative
information content of the first n components is returned in the n-th component of CumInformationCont,
i.e., CumInformationCont contains the sums of the first n elements of InformationCont. To use
get_prep_info_class_svm, a sufficient number of samples must be added to the support vector machine
(SVM) given by SVMHandle by using add_sample_class_svm or read_samples_class_svm.
InformationCont and CumInformationCont can be used to decide how many components of the
transformed feature vectors contain relevant information. An often used criterion is to require that the trans-
formed data must represent x% (e.g., 90%) of the data. This can be decided easily from the first value
of CumInformationCont that lies above x%. The number thus obtained can be used as the value for
NumComponents in a new call to create_class_svm. The call to get_prep_info_class_svm al-
ready requires the creation of an SVM, and hence the setting of NumComponents in create_class_svm to
an initial value. However, when get_prep_info_class_svm is called, it is typically not known how many
components are relevant, and hence how to set NumComponents in this call. Therefore, the following two-
step approach should typically be used to select NumComponents: In a first step, an SVM with the maximum
number for NumComponents is created (NumFeatures for ’principal_components’ and min(NumClasses−
1, NumFeatures) for ’canonical_variates’). Then, the training samples are added to the SVM and are saved in
a file using write_samples_class_svm. Subsequently, get_prep_info_class_svm is used to deter-
mine the information content of the components, and with this NumComponents. After this, a new SVM with the
desired number of components is created, and the training samples are read with read_samples_class_svm.
Finally, the SVM is trained with train_class_svm.
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of preprocessing used to transform the feature vectors.
Default: ’principal_components’
List of values: Preprocessing ∈ {’principal_components’, ’canonical_variates’}
. InformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Relative information content of the transformed feature vectors.
. CumInformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Cumulative information content of the transformed feature vectors.
Example

* Create the initial SVM
create_class_svm (NumFeatures, 'rbf', 0.01, 0.01, NumClasses,\
'one-versus-all', 'normalization', NumFeatures,\
SVMHandle)
* Generate and add the training data
for J := 0 to NumData-1 by 1
* Generate training features and classes
* Data = [...]
* Class = ...
add_sample_class_svm (SVMHandle, Data, Class)
endfor
write_samples_class_svm (SVMHandle, 'samples.mtf')
* Compute the information content of the transformed features
get_prep_info_class_svm (SVMHandle, 'principal_components',\
InformationCont, CumInformationCont)
* Determine NumComp by inspecting InformationCont and CumInformationCont
* NumComp = [...]
* Create the actual SVM
create_class_svm (NumFeatures, 'rbf', 0.01, 0.01, NumClasses, \
'one-versus-all', 'principal_components', \
NumComp, SVMHandle)
* Train the SVM
read_samples_class_svm (SVMHandle, 'samples.mtf')
train_class_svm (SVMHandle, 0.001, 'default')
write_class_svm (SVMHandle, 'classifier.svm')

Result


If the parameters are valid the operator get_prep_info_class_svm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
get_prep_info_class_svm may return the error 9211 (Matrix is not positive definite) if Preprocessing
= ’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
add_sample_class_svm, read_samples_class_svm
Possible Successors
clear_class_svm, create_class_svm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation

get_sample_class_svm ( : : SVMHandle, IndexSample : Features, Target )

Return a training sample from the training data of a support vector machine.

get_sample_class_svm reads out a training sample from the support vector machine (SVM) given by
SVMHandle that was added with add_sample_class_svm or read_samples_class_svm. The in-
dex of the sample is specified with IndexSample. The index is counted from 0, i.e., IndexSample
must be a number between 0 and NumSamples − 1, where NumSamples can be determined with
get_sample_num_class_svm. The training sample is returned in Features and Target. Features
is a feature vector of length NumFeatures (see create_class_svm), while Target is the index of the
class, ranging between 0 and NumClasses-1 (see add_sample_class_svm).
get_sample_class_svm can, for example, be used to reclassify the training data with
classify_class_svm in order to determine which training samples, if any, are classified incorrectly.
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. IndexSample (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of the stored training sample.
. Features (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector of the training sample.
. Target (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Target vector of the training sample.
Example

* Train an SVM
create_class_svm (NumFeatures, 'rbf', 0.01, 0.01, NumClasses,\
'one-versus-all', 'normalization', NumFeatures,\
SVMHandle)
read_samples_class_svm (SVMHandle, 'samples.mtf')
train_class_svm (SVMHandle, 0.001, 'default')
* Reclassify the training samples
get_sample_num_class_svm (SVMHandle, NumSamples)
for I := 0 to NumSamples-1 by 1
get_sample_class_svm (SVMHandle, I, Data, Target)
classify_class_svm (SVMHandle, Data, 1, Class)
if (Class != Target)
* Sample has been classified incorrectly
endif
endfor

Result
If the parameters are valid the operator get_sample_class_svm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
add_sample_class_svm, read_samples_class_svm, get_sample_num_class_svm,
get_support_vector_class_svm
Possible Successors
classify_class_svm
See also
create_class_svm
Module
Foundation

get_sample_num_class_svm ( : : SVMHandle : NumSamples )

Return the number of training samples stored in the training data of a support vector machine.
get_sample_num_class_svm returns in NumSamples the number of training samples that are stored in
the support vector machine (SVM) given by SVMHandle. get_sample_num_class_svm should be called
before the individual training samples are accessed with get_sample_class_svm, e.g., for the purpose of
reclassifying the training data (see get_sample_class_svm).
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. NumSamples (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of stored training samples.
Result
If SVMHandle is valid the operator get_sample_num_class_svm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
add_sample_class_svm, read_samples_class_svm


Possible Successors
get_sample_class_svm
See also
create_class_svm
Module
Foundation

get_support_vector_class_svm ( : : SVMHandle,
IndexSupportVector : Index )

Return the index of a support vector from a trained support vector machine.
The operator get_support_vector_class_svm maps a support vector of a trained SVM (given
in SVMHandle) to the original training data set. The index of the SV is specified with
IndexSupportVector. The index is counted from 0, i.e., IndexSupportVector must be a num-
ber between 0 and NumSupportVectors − 1, where NumSupportVectors can be determined with
get_support_vector_num_class_svm. The index of this SV in the training data is returned in Index.
This Index can be used for a query with get_sample_class_svm to obtain the feature vectors that become
support vectors. get_sample_class_svm can, for example, be used to visualize the support vectors.
Note that when using train_class_svm with a mode different from ’default’ or reducing the SVM with
reduce_class_svm, the returned Index will always be -1, i.e., it will be invalid. The reason for this is
that a consistent mapping between SV and training data becomes impossible.
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. IndexSupportVector (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Index of the stored support vector.
. Index (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Index of the support vector in the training set.
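The following sketch shows one possible way to collect the feature vectors of all support vectors, e.g., for visualization. It assumes that the SVM in SVMHandle was trained with TrainMode ’default’ (otherwise Index is -1, see above); the variable names are illustrative.

* Collect the feature vectors of all support vectors
get_support_vector_num_class_svm (SVMHandle, NumSupportVectors, NumSVPerSVM)
SVFeatures := []
for I := 0 to NumSupportVectors-1 by 1
    get_support_vector_class_svm (SVMHandle, I, Index)
    get_sample_class_svm (SVMHandle, int(Index), Features, Target)
    SVFeatures := [SVFeatures,Features]
endfor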
Result
If the parameters are valid the operator get_support_vector_class_svm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
train_class_svm, get_support_vector_num_class_svm
Possible Successors
get_sample_class_svm
See also
create_class_svm
Module
Foundation

get_support_vector_num_class_svm (
: : SVMHandle : NumSupportVectors, NumSVPerSVM )

Return the number of support vectors of a support vector machine.


get_support_vector_num_class_svm returns in NumSupportVectors the number of support vectors
that are stored in the support vector machine (SVM) given by SVMHandle.
get_support_vector_num_class_svm should be called before the indices of individual support
vectors are read out with get_support_vector_class_svm, e.g., for visualizing which of the training
samples become support vectors (see get_support_vector_class_svm). The number of SVs of each
sub-classifier is listed in NumSVPerSVM. The reason that its sum can differ from the number returned in
NumSupportVectors is that SV evaluations are reused throughout different sub-classifiers. NumSVPerSVM
can be used to control the process of speeding up the SVM classification time with the operator
reduce_class_svm.
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. NumSupportVectors (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Total number of support vectors.
. NumSVPerSVM (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Number of SV of each sub-SVM.

Result
If SVMHandle is valid the operator get_support_vector_num_class_svm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
train_class_svm
Possible Successors
get_sample_class_svm
See also
create_class_svm
Module
Foundation

read_class_svm ( : : FileName : SVMHandle )

Read a support vector machine from a file.


read_class_svm reads a support vector machine (SVM) that has been stored with write_class_svm.
Since the training of an SVM can consume a relatively long time, the SVM is typically trained in an offline process
and written to a file with write_class_svm. In the online process the SVM is read with read_class_svm
and subsequently used for classification with classify_class_svm. The default HALCON file extension for
the SVM classifier is ’gsc’.
Parameters
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name.
File extension: .gsc
. SVMHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
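A minimal sketch of the offline/online split described above; the file name and the feature vector are placeholders.

* Offline: train the SVM and write it to a file (see write_class_svm)
* Online: read the SVM and classify new feature vectors
read_class_svm ('classifier.gsc', SVMHandle)
classify_class_svm (SVMHandle, Features, 1, Class)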
Result
If the parameters are valid the operator read_class_svm returns the value 2 (H_MSG_TRUE). If necessary, an
exception is raised.
Execution Information


• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
classify_class_svm, create_class_lut_svm
Alternatives
read_dl_classifier
See also
create_class_svm, write_class_svm
Module
Foundation

read_samples_class_svm ( : : SVMHandle, FileName : )

Read the training data of a support vector machine from a file.


read_samples_class_svm reads training samples from the file given by FileName and adds them to
the training samples that have already been added to the support vector machine (SVM) given by SVMHandle.
The SVM must be created with create_class_svm before calling read_samples_class_svm.
As described with train_class_svm and write_samples_class_svm, the operators
read_samples_class_svm, add_sample_class_svm, and write_samples_class_svm can
be used to build up an extensive set of training samples, and hence to improve the performance of the SVM by
retraining the SVM with extended data sets.
It should be noted that the training samples must have the correct dimensionality. The feature vectors and tar-
get vectors stored in FileName must have the lengths NumFeatures and NumClasses that were specified
with create_class_svm. The target is stored in vector form for compatibility reasons with the MLP (see
read_samples_class_mlp). If the dimensions are incorrect, an error message is returned.
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name.
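A minimal sketch of extending a stored sample set and retraining, as described above; the file names and the new sample are placeholders.

* Read previously stored samples and add further samples
read_samples_class_svm (SVMHandle, 'samples.mtf')
add_sample_class_svm (SVMHandle, NewFeatures, NewClass)
* Write the extended sample set and retrain the SVM
write_samples_class_svm (SVMHandle, 'samples_extended.mtf')
train_class_svm (SVMHandle, 0.001, 'default')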
Result
If the parameters are valid the operator read_samples_class_svm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• SVMHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_class_svm
Possible Successors
train_class_svm


Alternatives
add_sample_class_svm
See also
write_samples_class_svm, clear_samples_class_svm
Module
Foundation

reduce_class_svm ( : : SVMHandle, Method, MinRemainingSV, MaxError : SVMHandleReduced )

Approximate a trained support vector machine by a reduced support vector machine for faster classification.
As described in create_class_svm, the classification time of an SVM depends on the number of kernel evalu-
ations between the support vectors and the feature vectors. While the length of the data vectors can be reduced in a
preprocessing step like ’principal_components’ or ’canonical_variates’ (see create_class_svm for details),
the number of resulting SV depends on the complexity of the classification problem. The number of SVs is deter-
mined during training. To further reduce classification time, the number of SVs can be reduced by approximating
the original separating hyperplane with fewer SVs than originally required. For this purpose, a copy of the orig-
inal SVM provided by SVMHandle is created and returned in SVMHandleReduced. This new SVM has the
same parametrization as the original SVM, but a different SV expansion. The training samples that are included in
SVMHandle are not copied. The original SVM is not modified by reduce_class_svm.
The reduction method is selected with Method. Currently, only a bottom up approach is supported, which itera-
tively merges SVs. The algorithm stops if either the minimum number of SVs is reached (MinRemainingSV)
or if the accumulated maximum error exceeds the threshold MaxError. Note that the approximation reduces the
complexity of the hyperplane and can thereby deteriorate the classification rate. A common approach is there-
fore to start from a small MaxError, e.g., 0.001, and to increase its value step by step. To control the reduction
ratio, at each step the number of remaining SVs is determined with get_support_vector_num_class_svm
and the classification rate is checked on a separate test data set with classify_class_svm.
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
Original SVM handle.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of postprocessing to reduce number of SV.
Default: ’bottom_up’
List of values: Method ∈ {’bottom_up’}
. MinRemainingSV (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Minimum number of remaining SVs.
Default: 2
Suggested values: MinRemainingSV ∈ {2, 3, 4, 5, 7, 10, 15, 20, 30, 50}
Restriction: MinRemainingSV >= 2
. MaxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Maximum allowed error of reduction.
Default: 0.001
Suggested values: MaxError ∈ {0.0001, 0.0002, 0.0005, 0.001, 0.002, 0.005, 0.01, 0.02, 0.05}
Restriction: MaxError > 0.0
. SVMHandleReduced (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVMHandle of reduced SVM.
Example

* Train an SVM
create_class_svm (NumFeatures, 'rbf', 0.01, 0.01, NumClasses,\
'one-versus-all', 'normalization', NumFeatures,\
SVMHandle)
read_samples_class_svm (SVMHandle, 'samples.mtf')
train_class_svm (SVMHandle, 0.001, 'default')


* Create a reduced SVM


reduce_class_svm (SVMHandle, 'bottom_up', 2, 0.01, SVMHandleReduced)
write_class_svm (SVMHandleReduced, 'classifier.svm')
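
The step-wise reduction strategy described above could be sketched as follows; the list of MaxError values and the acceptance check on separate test data are placeholders that have to be adapted to the application.

* Increase MaxError step by step and monitor the effect
MaxErrors := [0.001,0.002,0.005,0.01]
for I := 0 to |MaxErrors|-1 by 1
    reduce_class_svm (SVMHandle, 'bottom_up', 2, MaxErrors[I], SVMHandleReduced)
    get_support_vector_num_class_svm (SVMHandleReduced, NumSV, NumSVPerSVM)
    * Check the classification rate of SVMHandleReduced on separate
    * test data with classify_class_svm and keep the reduced SVM
    * as long as the rate is still acceptable
endfor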

Result
If the parameters are valid the operator reduce_class_svm returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
train_class_svm, get_support_vector_num_class_svm
Possible Successors
classify_class_svm, write_class_svm, get_support_vector_num_class_svm
See also
train_class_svm
Module
Foundation

select_feature_set_svm ( : : ClassTrainDataHandle, SelectionMethod, GenParamName, GenParamValue : SVMHandle, SelectedFeatureIndices, Score )

Selects an optimal combination of features to classify the provided data.


select_feature_set_svm selects an optimal subset from a set of features to solve a given clas-
sification problem. The classification problem has to be specified with annotated training data in
ClassTrainDataHandle and will be classified by a support vector machine (SVM). Details of the proper-
ties of this classifier can be found in create_class_svm.
The result of the operator is a trained classifier that is returned in SVMHandle. Additionally, the list of indices or
names of the selected features is returned in SelectedFeatureIndices. To use this classifier, calculate for
new input data all features mentioned in SelectedFeatureIndices and pass them to the classifier.
A possible application of this operator can be a comparison of different parameter sets for certain feature extraction
techniques. Another application is to search for a feature that is discriminating between different classes.
Additionally, the values for ’nu’ and ’gamma’ can be estimated for the SVM. To only estimate these two parameters
without altering the feature set, the feature vector has to be specified as one large subfeature.
To define the features that should be selected from ClassTrainDataHandle, the dimensions
of the feature vectors in ClassTrainDataHandle can be grouped into subfeatures by calling
set_feature_lengths_class_train_data. A subfeature can contain several subsequent elements of
a feature vector. The operator decides for each of these subfeatures, if it is better to use it for the classification or
leave it out.
The indices of the selected subfeatures are returned in SelectedFeatureIndices. If names were set
in set_feature_lengths_class_train_data, these names are returned instead of the indices. If
set_feature_lengths_class_train_data was not called for ClassTrainDataHandle before,
each element of the feature vector is considered as a subfeature.
The selection method SelectionMethod is either a greedy search ’greedy’ (iteratively add the feature with
highest gain) or the dynamically oscillating search ’greedy_oscillating’ (add the feature with the highest gain and
then test whether any of the already added features can be left out without great loss). The method ’greedy’ is generally
preferable, since it is faster. Only in cases when the subfeatures are low-dimensional or redundant, the method
’greedy_oscillating’ should be chosen.


The optimization criterion is the classification rate of a two-fold cross-validation of the training data. The best
achieved value is returned in Score.
The parameters ’nu’ and ’gamma’ for the SVM that is used to classify can be set to ’auto’ by using the parameters
GenParamName and GenParamValue. If they are set to ’auto’, the optimal ’nu’ and/or ’gamma’ is estimated
automatically. The automatic estimation of ’nu’ and ’gamma’ can take a substantial amount of time (up to days,
depending on the data set and the number of features).
Additionally, there is the parameter ’mode’ which can be either set to ’one-versus-all’ or ’one-versus-one’. An
explanation of the two modes as well as of the parameters ’nu’ and ’gamma’ as the kernel parameter of the radial
basis function (RBF) kernel can be found in create_class_svm.
Attention
This operator may take considerable time, depending on the size of the data set in the training file, and the number
of features.
Please note that this operator should not be called if only a small set of training data is available. Due to the risk of
overfitting, the operator select_feature_set_svm may deliver a classifier with a very high score. However,
the classifier may perform poorly when tested.
Parameters
. ClassTrainDataHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of the training data.
. SelectionMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Method to perform the selection.
Default: ’greedy’
List of values: SelectionMethod ∈ {’greedy’, ’greedy_oscillating’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Names of generic parameters to configure the selection process and the classifier.
Default: []
List of values: GenParamName ∈ {’nu’, ’gamma’, ’mode’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer / string
Values of generic parameters to configure the selection process and the classifier.
Default: []
Suggested values: GenParamValue ∈ {0.02, 0.05, ’auto’, ’one-versus-one’, ’one-versus-all’}
. SVMHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
A trained SVM classifier using only the selected features.
. SelectedFeatureIndices (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
The selected feature set, contains indices.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
The achieved score using two-fold cross-validation.
Example

* Find out which of the two features distinguishes two Classes


NameFeature1 := 'Good Feature'
NameFeature2 := 'Bad Feature'
LengthFeature1 := 3
LengthFeature2 := 2
* Create training data
create_class_train_data (LengthFeature1+LengthFeature2,\
ClassTrainDataHandle)
* Define the features which are in the training data
set_feature_lengths_class_train_data (ClassTrainDataHandle, [LengthFeature1,\
LengthFeature2], [NameFeature1, NameFeature2])
* Add training data
* |Feat1| |Feat2|
add_sample_class_train_data (ClassTrainDataHandle, 'row', [1,1,1, 2,1 ], 0)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [2,2,2, 2,1 ], 1)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [1,1,1, 3,4 ], 0)
add_sample_class_train_data (ClassTrainDataHandle, 'row', [2,2,2, 3,4 ], 1)


* Add more data


* ...
* Select the better feature with a SVM
select_feature_set_svm (ClassTrainDataHandle, 'greedy', [], [], SVMHandle,\
SelectedFeatureSVM, Score)
* Use the classifier
* ...

Result
If the parameters are valid, the operator select_feature_set_svm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
create_class_train_data, add_sample_class_train_data,
set_feature_lengths_class_train_data
Possible Successors
classify_class_svm
Alternatives
select_feature_set_mlp, select_feature_set_knn, select_feature_set_gmm
See also
select_feature_set_trainf_svm, gray_features, region_features
Module
Foundation

serialize_class_svm ( : : SVMHandle : SerializedItemHandle )

Serialize a support vector machine (SVM).


serialize_class_svm serializes a support vector machine (SVM) and its stored training samples (see
fwrite_serialized_item for an introduction of the basic principle of serialization). The same data that
is written in a file by write_class_svm and write_samples_class_svm is converted to a serialized
item. The support vector machine is defined by the handle SVMHandle. The serialized support vector machine is
returned by the handle SerializedItemHandle and can be deserialized by deserialize_class_svm.
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. SerializedItemHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
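A minimal sketch of serializing a trained SVM to a file and restoring it; the file name is a placeholder.

* Serialize the trained SVM and write it to a file
serialize_class_svm (SVMHandle, SerializedItemHandle)
open_file ('svm.ser', 'output_binary', FileHandle)
fwrite_serialized_item (FileHandle, SerializedItemHandle)
close_file (FileHandle)
* Later: read the serialized item and restore the SVM
open_file ('svm.ser', 'input_binary', FileHandle)
fread_serialized_item (FileHandle, SerializedItemHandle)
close_file (FileHandle)
deserialize_class_svm (SerializedItemHandle, SVMHandle)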
Result
If the parameters are valid, the operator serialize_class_svm returns the value 2 (H_MSG_TRUE). If nec-
essary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).


• Processed without parallelization.


Possible Predecessors
train_class_svm
Possible Successors
clear_class_svm, fwrite_serialized_item, send_serialized_item,
deserialize_class_svm
See also
create_class_svm, read_class_svm, write_samples_class_svm,
deserialize_class_svm
Module
Foundation

train_class_svm ( : : SVMHandle, Epsilon, TrainMode : )

Train a support vector machine.


train_class_svm trains the support vector machine (SVM) given in SVMHandle. Before the SVM
can be trained, the training samples to be used for the training must be added to the SVM using
add_sample_class_svm or read_samples_class_svm.
Technically, training an SVM means solving a convex quadratic optimization problem. This guarantees that
training terminates after a finite number of steps at the global optimum. In order to recognize termination,
the gradient of the function that is optimized internally must fall below a threshold, which is set in Epsilon.
By default, a value of 0.001 should be used for Epsilon since this yields the best results in practice. Too large
a value leads to premature termination and might result in suboptimal solutions. Too small a value makes the
optimization take longer, often without changing the recognition rate significantly. Nevertheless, if
longer training times are possible, a smaller value than 0.001 might be chosen. There are two common reasons
for changing Epsilon: First, if you specified a very small value for Nu when calling (create_class_svm),
e.g., Nu = 0.001, a smaller Epsilon might significantly improve the recognition rate. A second case is the
determination of the optimal kernel function and its parametrization (e.g., the KernelParam-Nu pair for the RBF
kernel) with the computationally intensive n-fold cross validation. Here, choosing a bigger Epsilon reduces the
computational time without changing the parameters of the optimal kernel that would be obtained when using the
default Epsilon. After the optimal KernelParam-Nu pair is obtained, the final training is conducted with a
small Epsilon.
The duration of the training depends on the training data, in particular on the number of resulting support vectors
(SVs), and Epsilon. It can lie between seconds and several hours. It is therefore recommended to choose the
SVM parameter Nu in create_class_svm so that as few SVs as possible are generated without decreasing
the recognition rate. Special care must be taken with the parameter Nu in create_class_svm so that the
optimization starts from a feasible region. If a too large Nu allows too many training errors, an exception
is raised. In this case, an SVM with the same training data, but with a smaller Nu, must be trained.
With the parameter TrainMode you can choose between different training modes. Normally, you train an SVM
without additional information and TrainMode is set to ’default’. If multiple SVMs for the same data set but with
different kernels are trained, subsequent training runs can reuse optimization results and thus reduce the overall
training time of all runs. For this mode, the SVM handle of a previously trained SVM is passed in TrainMode.
Note that the SVM handle passed in SVMHandle and the SVMHandle passed in TrainMode must have the
same training data, the same mode and the same number of classes (see create_class_svm). The application
for this training mode is the evaluation of different kernel functions given the same training set. In the literature
this is referred to as alpha seeding.
With TrainMode = ’add_sv_to_train_set’ it is possible to append the support vectors that were generated by a
previous call of train_class_svm to the currently saved training set. This mode has two typical application
areas: First, it is possible to train an SVM gradually. For this, the complete training set is divided into disjoint
chunks. The first chunk is trained normally using TrainMode = ’default’. Afterwards, the previous training set is
removed with clear_samples_class_svm, the next chunk is added with add_sample_class_svm and
trained with TrainMode = ’add_sv_to_train_set’. This is repeated until all chunks are trained. This approach has
the advantage that even huge training data sets can be trained efficiently with respect to memory consumption. A
second application area for this mode is that a general purpose classifier can be specialized by adding characteristic


training samples and then retraining it. Please note that the preprocessing (as described in create_class_svm)
is not changed when training with TrainMode = ’add_sv_to_train_set’.
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. Epsilon (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Stop parameter for training.
Default: 0.001
Suggested values: Epsilon ∈ {0.00001, 0.0001, 0.001, 0.01, 0.1}
. TrainMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; string / integer
Mode of training. For normal operation: ’default’. If SVs already included in the SVM should be used for
training: ’add_sv_to_train_set’. For alpha seeding: the respective SVM handle.
Default: ’default’
List of values: TrainMode ∈ {’default’, ’add_sv_to_train_set’}
Example

* Train an SVM
create_class_svm (NumFeatures, 'rbf', 0.01, 0.01, NumClasses,\
'one-versus-all', 'normalization', NumFeatures,\
SVMHandle)
read_samples_class_svm (SVMHandle, 'samples.mtf')
train_class_svm (SVMHandle, 0.001, 'default')
write_class_svm (SVMHandle, 'classifier.svm')
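
The chunk-wise training with TrainMode = ’add_sv_to_train_set’ described above could be sketched as follows; ChunkFiles is a placeholder for sample files that together form the complete training set.

* Train a large training set in chunks
ChunkFiles := ['chunk1.mtf','chunk2.mtf','chunk3.mtf']
read_samples_class_svm (SVMHandle, ChunkFiles[0])
train_class_svm (SVMHandle, 0.001, 'default')
for I := 1 to |ChunkFiles|-1 by 1
    clear_samples_class_svm (SVMHandle)
    read_samples_class_svm (SVMHandle, ChunkFiles[I])
    train_class_svm (SVMHandle, 0.001, 'add_sv_to_train_set')
endfor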

Result
If the parameters are valid the operator train_class_svm returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator modifies the state of the following input parameter:
• SVMHandle

During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
add_sample_class_svm, read_samples_class_svm
Possible Successors
classify_class_svm, write_class_svm, create_class_lut_svm
Alternatives
train_dl_classifier_batch, read_class_svm
See also
create_class_svm
References
John Shawe-Taylor, Nello Cristianini: “Kernel Methods for Pattern Analysis”; Cambridge University Press, Cam-
bridge; 2004.
Bernhard Schölkopf, Alexander J.Smola: “Learning with Kernels”; MIT Press, London; 1999.
Module
Foundation


write_class_svm ( : : SVMHandle, FileName : )

Write a support vector machine to a file.


write_class_svm writes the support vector machine (SVM) SVMHandle to the file given by FileName.
The default HALCON file extension for the SVM classifier is ’gsc’. write_class_svm is typically called
after the SVM has been trained with train_class_svm. The SVM can be read with read_class_svm.
write_class_svm does not write any training samples that possibly have been stored in the SVM. For this
purpose, write_samples_class_svm should be used.
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name.
File extension: .gsc
Result
If the parameters are valid the operator write_class_svm returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
train_class_svm
Possible Successors
clear_class_svm
See also
create_class_svm, read_class_svm, write_samples_class_svm
Module
Foundation

write_samples_class_svm ( : : SVMHandle, FileName : )

Write the training data of a support vector machine to a file.


write_samples_class_svm writes the training samples currently stored in the support vector machine
(SVM) SVMHandle to the file given by FileName. write_samples_class_svm can be used to build up
a database of training samples, and hence to improve the performance of the SVM by training it with an extended
data set (see train_class_svm). The file FileName is overwritten by write_samples_class_svm.
Nevertheless, extending the database of training samples is easy because read_samples_class_svm
and add_sample_class_svm add new training samples to the ones that are already stored in memory with
the SVM.
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name.
Result
If the parameters are valid the operator write_samples_class_svm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information


• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Possible Predecessors
add_sample_class_svm
Possible Successors
clear_samples_class_svm
See also
create_class_svm, get_prep_info_class_svm, read_samples_class_svm
Module
Foundation



Chapter 8

Control

assign ( : : Input : Result )

Assign a new value to a variable.


assign assigns a new value to a variable.
In the full text editor, an assignment is simply entered with the help of the assignment operator ’:=’, e.g.:

u := sin(x) + cos(y)

This is equivalent to the C syntax assignment:

u = sin(x) + cos(y);

If the operator window is used for entering an assignment, assign must be entered into the operator combo box
as an operator name. This opens the parameter area, where the parameter Input represents the expression that
has to be evaluated to one value and assigned to the variable, i.e., this is the right side of the assignment. The
parameter Result gets the name of the variable, i.e., this is the left side of the assignment.
Attention
In addition to the parameter type control, which is indicated in the parameter description, assign also supports
iconic variables and vector variables. For an assignment, the parameter types of the two parameters Input and
Result must be identical. For the assignment of iconic objects, the operator copy_obj is used internally.
Parameters
. Input (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer / string
New value.
Default: 1
. Result (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer / string
Variable that has to be changed.
Example

Tuple1 := [1,0,3,4,5,6,7,8,9]
Val := sin(1.2) + cos(1.2)
Tuple2 := []

Result
If the expression is correct assign returns 2 (H_MSG_TRUE). Otherwise, an exception is raised and an error
code returned.
Alternatives
insert
Module
Foundation


assign_at ( : : Index, Value : Result )

Assignment of one or several values to one or several tuple elements.


assign_at assigns a single value to one or several elements of a tuple, or it assigns a number of values ele-
mentwise to the specified elements of the output tuple. All other elements of the output tuple keep their values.
If the passed indices are outside the current range of the output tuple, the tuple is enlarged and the new elements
are initialized to a default value.
In the full text editor an assign_at operation is simply entered with the help of the assignment operator symbol
’:=’ and the index access operator symbol ’[ ]’ following the output variable. The Index parameter can be any
expression that evaluates to any number of positive integer values. The Value parameter must evaluate to exactly
one value or to the same number of indices that are provided via the Index parameter, e.g.:

Areas[Radius-1] := Area
Areas[0,4,|Rad|-1] := 0
FileNames[0,2,4] := ['f1','f2','f3']

The operator assign_at replaces and extends the modifying version of the old insert operator.
Parameters
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Indices of the elements that have to be replaced by the new value(s).
Default: 0
Suggested values: Index ∈ {0, 1, 2, 3, 4, 5, 6}
Minimum increment: 1
. Value (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . tuple(-array) ; integer / real / string
Value(s) that is to be assigned.
Default: 1
. Result (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . tuple(-array) ; real / integer / string
Result tuple containing the assigned values.
Result
If the expression is correct assign_at returns 2 (H_MSG_TRUE). Otherwise, an exception is raised and an error
code returned.
Alternatives
assign, tuple_replace
Module
Foundation

break ( : : : )

Terminate loop execution or leave a switch block.


break terminates the smallest enclosing for, while, or repeat..until loop. In addition, the break state-
ment is used to leave a switch block, in particular at the end of a case branch. The program execution is
continued at the program line following the corresponding block.
break statements that are not enclosed by a loop or switch block are invalid.
Example

read_image (Image, 'monkey')


threshold (Image, Region, 160, 180)
connection (Region, Regions)
Number := |Regions|
AllRegionsValid := 1
* check if for all regions area <=30
for i := 1 to Number by 1


select_obj (Regions, ObjectSelected, i)


area_center (ObjectSelected, Area, Row, Column)
if (Area > 30)
AllRegionsValid := 0
break
endif
endfor

Result
break (as an operator) always returns 2 (H_MSG_TRUE).
Alternatives
continue
See also
for, while, repeat, until, switch, case
Module
Foundation

case ( : : Constant : )

Jump label that starts a branch within a switch block.


case defines a jump label within a switch block. It starts a branch that is executed if the value of the control
expression of the switch statement matches the constant integer expression that is defined in Constant. For
this parameter only constant integer expressions are accepted. Variable expressions and other data types are not
allowed.
As in the programming languages C, C++, and C# the case statement does not open a block that is automatically
left at the next case or default statement. In contrast, it works just like a goto label that is accessed if the label
matches. In order to leave a case branch and continue execution after the end of the switch block, the break
statement can be used anywhere within the switch block.
Parameters
. Constant (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Constant integer expressions that determines for which value of the switch control expression the branch is
accessed.
Default: 1
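A sketch of a switch block in full-text-editor syntax; the variable Grade and the assigned strings are illustrative. Note how the second and third case labels share one branch due to the fall-through behavior described above.

switch (Grade)
case 1:
    Text := 'very good'
    break
case 2:
case 3:
    Text := 'good'
    break
default:
    Text := 'other'
endswitch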
Result
case (as an operator) always returns 2 (H_MSG_TRUE).
Alternatives
elseif
See also
switch, default, endswitch, if
Module
Foundation

catch ( : : : Exception )

Catches exceptions that were thrown in the preceding try block.


With the help of the operators try, catch, endtry, and throw it is possible to implement a dynamic exception
handling in HDevelop, which is comparable to the exception handling in C++ and C#. The basic concepts of the
exception handling in HDevelop are described at the operators try, throw, and dev_set_check as well as in
the “HDevelop User’s Guide”.
The operator catch ends a block of watched program lines and starts a block of program lines that have to be
executed in an error case. If the try-catch block is executed without an exception, the catch-endtry block


is ignored and program execution continues after the corresponding endtry operator. In contrast, in an error case
the program execution jumps directly from the operator where the error occurred (or from the throw operator) to
the catch operator of the surrounding try-catch block. The output control parameter Exception returns a
tuple that contains a predefined set of data describing the error in case an operator error occurred. If the exception
was thrown by the throw operator, an arbitrary user-defined tuple can be returned.
The most important data within the Exception tuple is the error code. Therefore, this is passed as the first item
of the Exception tuple and can be accessed directly with Exception[0]. However, all other data has to be
accessed through the operator dev_get_exception_data, because the order and the extent of the provided
data may change in future versions and may vary for different programming language exports. Especially, it has
to be taken into account that in the exported code there are some items of the error tuple that are not available and
others that might not be determined until they are requested (like error messages).
If the exception was thrown by an operator error, a HALCON error code (< 10000) is returned as the error code;
if the aborted operator belongs to an extension package, a user-defined error code (> 10000) is returned instead.
A list of all HALCON error codes can be found in the appendix of the “Extension Package Programmer’s Manual”. The first
element of a user-defined Exception tuple thrown by the operator throw should be an error code ≥ 30000.
Additional tuple elements can be chosen without any restrictions.
If an operator error occurred within HDevelop or HDevEngine, the following information about the error is pro-
vided by the Exception tuple:

• The HALCON error code.


• An additional HDevelop specific error code that specifies whether an error was caught within the HALCON
operator (code = 21000) or outside the operator, e.g., during the evaluation and assignment of the parameter
expressions. In the latter case the error code specifies the kind of error more precisely.
• The HALCON error message.
• An appropriate HDevelop-specific error message.
• The number of the program line, where the error occurred.
• The name of the operator that threw the exception (if the exception was thrown in a protected procedure,
'--protected--' is returned instead of the operator name).
• The depth of the call stack (if the error occurred in ’main’ a depth of 1 is returned).
• The name of the procedure, where the error occurred.

In most cases, for an automatic exception handling it is sufficient to use the HALCON error code. Additional data
is primarily passed in order to provide some information about the error condition to the developer of the HDevelop
program for debugging reasons. Attention: in the exported code, in general, information about the error location
will not be available.
Attention
The export of the operators try, catch, endtry, and throw is not supported for the language C, but only for
the languages C++, C# and VisualBasic/.NET. Only the latter support throwing exceptions across procedures.
Parameters
. Exception (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . exception-array ; integer / string
Tuple returning the exception data.
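A minimal sketch of a try-catch block; the file name is a placeholder that is assumed not to exist, and ’error_message’ is assumed here to be a valid data name for dev_get_exception_data.

try
    read_image (Image, 'file_that_does_not_exist')
catch (Exception)
    * The error code is the first element of the exception tuple
    ErrorCode := Exception[0]
    * Further data is queried via dev_get_exception_data
    dev_get_exception_data (Exception, 'error_message', ErrorMessage)
endtry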
Result
catch always returns 2 (H_MSG_TRUE).
Possible Successors
dev_get_exception_data
See also
try, endtry, throw, dev_get_exception_data, dev_set_check
Module
Foundation

comment ( : : Comment : )

Add a comment of one line to the program.


comment allows adding a one-line comment to the program. All characters are allowed as the parameter value,
i.e., as the comment. If the operator window is used to enter a comment and there are newlines in the comment
parameter, one comment statement is inserted for every text line.
In the full text editor a comment is marked by entering an asterisk (’*’) as the first non-whitespace character.
This operator has no effect on the program execution.
Parameters
. Comment (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Arbitrary sequence of characters.
Example

* This is a program with comments


* 'this is a string as comment'
* here are numbers: 4711, 0.815
stop ()

Result
comment is never executed.
Module
Foundation

continue ( : : : )

Skip the current loop execution.


continue skips the remainder of the current iteration of the smallest enclosing for, while, or repeat..until
loop. Program execution is continued at the condition line of the loop, or at the line following the continue
statement in case no enclosing loop exists.
Result
continue (as operators) always returns 2 (H_MSG_TRUE).
Alternatives
break
See also
for, while, repeat, until
Module
Foundation

convert_tuple_to_vector_1d ( : : InputTuple,
SubTupleLength : ResultVector )

Distribute the elements of a tuple to a vector.


convert_tuple_to_vector_1d transforms a tuple into a vector variable. The input tuple InputTuple
is split into sub-tuples, each consisting of SubTupleLength elements. The sub-tuples are stored in the output
vector ResultVector.
Parameters
. InputTuple (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer / string
Input tuple.
. SubTupleLength (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Desired length of the resulting tuples in the output vector.
Default: 1
. ResultVector (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-vector1 ; real / integer / string
Output vector.
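A minimal sketch; the values are illustrative, and the element access via .at() follows the vector syntax described in the HDevelop User’s Guide.

* Split a tuple of six values into three sub-tuples of length 2
convert_tuple_to_vector_1d ([1,2,3,4,5,6], 2, ResultVector)
* ResultVector now contains the tuples [1,2], [3,4], and [5,6]
FirstPair := ResultVector.at(0)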


Result
If the values of the specified parameters are correct, convert_tuple_to_vector_1d returns 2
(H_MSG_TRUE). Otherwise, an exception is raised and an error code returned.
See also
convert_vector_to_tuple
Module
Foundation

convert_vector_to_tuple ( : : InputVector : ResultTuple )

Concatenate the elements of a vector to a single tuple.


convert_vector_to_tuple transforms a vector into a tuple. The elements of the input vector
InputVector are concatenated and stored in the output tuple ResultTuple. If InputVector has a di-
mension of 2 or greater, its elements are collected in depth-first order. E.g., the input vector ’1,2,3,4’ will be
turned into the result tuple [1,2,3,4].
Parameters

. InputVector (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-vector ; real / integer / string


Input vector.
. ResultTuple (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer / string
Output tuple.
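A minimal sketch that reverses the conversion shown for convert_tuple_to_vector_1d; the values are illustrative.

* Build a vector of three sub-tuples and concatenate it back into one tuple
convert_tuple_to_vector_1d ([1,2,3,4,5,6], 2, Vector)
convert_vector_to_tuple (Vector, Tuple)
* Tuple is now [1,2,3,4,5,6]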
Result
If the values of the specified parameters are correct, convert_vector_to_tuple returns 2 (H_MSG_TRUE).
Otherwise, an exception is raised and an error code returned.
See also
convert_tuple_to_vector_1d
Module
Foundation

default ( : : : )

Alternative branch in a switch block.


default opens an alternative branch in a switch block. This branch is accessed if the calculated control
expression of the switch statement does not match any of the integer constants of the previous case statements.
Result
default (as an operator) always returns 2 (H_MSG_TRUE).
Alternatives
case, elseif, else
See also
switch, case, endswitch, if
Module
Foundation

else ( : : : )

Alternative of conditional statement.


else continues after an if or elseif block with an alternative block. If the conditions of all corresponding
if or elseif blocks evaluate to ’false’ (0), i.e., none of the corresponding if or elseif blocks has been
executed, the following else block is executed.


Result
else (as operator) always returns 2 (H_MSG_TRUE).
Alternatives
if, elseif
See also
until, for, while
Module
Foundation

elseif ( : : Condition : )

Conditional statement with alternative.


elseif is a conditional statement that continues after an if or another elseif block with an alternative block.
The Condition parameter must evaluate to a Boolean or integer expression.
If Condition evaluates to ’true’ (not 0), the following block body up to the next corresponding block state-
ment elseif, else, or endif is executed. Reaching the end of the block the execution continues after the
corresponding endif statement.
If Condition evaluates to ’false’ (0), the execution is continued at the next corresponding block statement
elseif, else, or endif.
Parameters
. Condition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Condition for the if statement.
Default: 1
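A minimal sketch of an if-elseif-else cascade; the variable Value and the thresholds are illustrative.

if (Value < 10)
    Class := 'small'
elseif (Value < 100)
    Class := 'medium'
else
    Class := 'large'
endif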
Result
If the condition is correct elseif (as operator) returns 2 (H_MSG_TRUE). Otherwise, an exception is raised and
an error code returned.
Alternatives
if
See also
else, for, while, until
Module
Foundation

endfor ( : : : )

End statement of a for loop.


endfor is the last statement of a for loop.
Result
endfor always returns 2 (H_MSG_TRUE).
See also
for
Module
Foundation

endif ( : : : )

End of if command.


endif is the last statement of an if, elseif, or else block.


Result
endif always returns 2 (H_MSG_TRUE).
See also
if
Module
Foundation

endswitch ( : : : )

Ends a multiway branch block.


endswitch ends a multiway branch block that has been opened by switch.
Result
endswitch always returns 2 (H_MSG_TRUE).
See also
switch
Module
Foundation

endtry ( : : : )

Ends a block where exceptions are handled.


With the help of the operators try, catch, endtry, and throw it is possible to implement a dynamic exception
handling in HDevelop, which is comparable to the exception handling in C++ and C#. The basic concepts of the
exception handling in HDevelop are described at the operators try, throw, and dev_set_check as well as in
the “HDevelop User’s Guide”.
The operator endtry closes the exception handling block that was opened with the operators try and catch.
Attention
The export of the operators try, catch, endtry, and throw is not supported for the language C, but only for
the languages C++, C#, and VisualBasic/.NET. Only the latter support throwing exceptions across procedures.
Result
endtry always returns 2 (H_MSG_TRUE).
See also
try, catch, throw, dev_get_exception_data, dev_set_check
Module
Foundation

endwhile ( : : : )

End statement of a while loop.


endwhile is the last statement of a while loop.
Result
endwhile always returns 2 (H_MSG_TRUE).
See also
while
Module
Foundation


executable_expression ( : : Expression : )

Execute a stand-alone operation.


The HDevelop language contains a few operations that are executed stand-alone, i.e., not as an expression within
another operator call. The operator executable_expression allows entering such stand-alone operations
into the operator window of HDevelop. In the full text editor however, those operations are entered verbatim.
Currently, the following modifying vector operations are stand-alone and can only be used in an executable ex-
pression:

• .clear()
• .insert()
• .remove()

For further details about these operations please refer to the HDevelop User’s Guide.
Even though Expression is formally presented as a control parameter, it is also possible to execute
stand-alone operations with iconic vectors.
Parameters
. Expression (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-vector ; real / integer / string
Operation to be executed.
Example

read_image (Image1, 'fin1')


read_image (Image2, 'fin2')
ImageVector.insert(1, Image1).insert(2, Image2)
* process vector
ImageVector.clear()

Result
If the values of the specified parameters are correct, executable_expression returns 2 (H_MSG_TRUE).
Otherwise, an exception is raised and an error code returned.
Module
Foundation

exit ( : : : )

Terminate HDevelop.
exit terminates HDevelop. The operator is equivalent to the menu entry File > Quit. Internally, and in
exported C++ code, the C function call exit(0) is used.
Example

read_image (Image, 'fabrik')


intensity (Image, Image, Mean, Deviation)
open_file ('intensity.txt', 'output', FileHandle)
fwrite_string (FileHandle, Mean + ' ' + Deviation)
close_file (FileHandle)
exit ()

Result
exit returns 0 (o.k.) to the calling environment of HDevelop, i.e., the operating system.
See also
stop
Module
Foundation


export_def ( : : Position, Declaration : )

Insert arbitrary text into the export code of a procedure.


export_def allows defining code lines or text blocks that are written verbatim into the output file of a procedure
or program that is exported.
The parameter Position controls the placement of the text given in Declaration. The following options are
supported:

’in_place’ - # The text is inserted in the procedure at the actual place, i.e., in between the neighboring program
lines.
’at_file_begin’ - #^^ The text is exported at the very beginning of the exported file.
’before_procedure’ - #^ The text is exported immediately before the procedure it is defined in.
’after_procedure’ - #$ The text is exported immediately after the procedure it is defined in.
’at_file_end’ - #$$ The text is exported at the very end of the exported file.

In the program listing, export_def is not represented in normal operator syntax but marked by a special char-
acter sequence. The first character within the line is the export marker # that can be followed by a position marker
as listed above. If entering an export definition in the full text editor, please note that there must not be any spaces
before #.
For better readability, the export character sequence may be followed by one space character that is not interpreted
as part of the export text. All additional spaces are added to the export.
For lines that are exported within the current procedure, the export gets the same indentation as the current program
lines get. There is one exception: if the export text starts with # immediately after the export markers or the optional
space, the export text will not be indented at all, e.g.:

for Index := 1 to 5 by 1
# #ifdef MY_SWITCH
# int cnt = 100;
* an optional code block
# #endif
endfor

is exported to:

proc (...)
{
...
for (...)
{
#ifdef MY_SWITCH
int cnt = 100;
// an optional block
#endif
}
...
}

An export definition can be activated and deactivated as any normal operator. Deactivated export definitions are
not exported.
Parameters
. Position (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Place where the export text is written.
List of values: Position ∈ {’in_place’, ’at_file_begin’, ’before_procedure’, ’after_procedure’,
’at_file_end’}


. Declaration (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string


Text that is exported.
Result
export_def is never executed.
See also
comment
Module
Foundation

for ( : : Start, End, Step : Index )

Starts a loop block that is usually executed for a fixed number of iterations.
Syntax in HDevelop: for Index := Start to End by Step
The for statement starts a loop block that is usually executed for a fixed number of iterations. The for block
ends at the corresponding endfor statement.
The number of iterations is defined by the Start value, the End value, and the increment value Step. All
of these parameters can be initialized with expressions or variables instead of constant values. Please note that
these loop parameters are evaluated only once, namely, immediately before the for loop is entered. They are
not re-evaluated after the loop cycles, i.e., any modifications of these variables within the loop body will have no
influence on the number of iterations.
The passed loop parameters must be either of type integer or real. If all input parameters are of type
integer, the Index variable will also be of type integer. In all other cases the Index variable will be
of type real.
At the beginning of each iteration the loop variable Index is compared to the End parameter. If the increment
value Step is positive, the for loop is executed as long as the Index variable is less than or equal to the End
parameter. If the increment value Step is negative, the for loop is executed as long as the Index variable is
greater than or equal to the End parameter.
Attention: If the increment value Step is set to a value of type real, it may happen that the last loop cycle is
omitted owing to rounding errors in case the Index variable is expected to match the End value exactly in the last
cycle. Hence, on some systems the following loop is not executed four times as expected (with the Index
variable set to 1.3, 1.4, 1.5, and 1.6), but only three times because after three additions the index variable is slightly
greater than 1.6 due to rounding errors.

I:=[]
for Index := 1.3 to 1.6 by 0.1
I := [I,Index]
endfor

After the execution of the loop body, i.e., upon reaching the corresponding endfor statement or a continue
statement, the increment value (as initialized at the beginning of the for loop) is added to the current value of
the loop counter Index. Then, the loop condition is re-evaluated as described above. Depending on the result
the loop is either executed again or finished in which case execution continues with the first statement after the
corresponding endfor statement.
A break statement within the loop (as long as it is not enclosed by an inner block such as a nested loop or switch) leaves
the loop immediately, and execution continues after the corresponding endfor statement. In contrast, the continue statement is used
to skip the rest of the loop body in the current cycle and to continue execution with updating the Index variable
and re-evaluating the loop condition.
Attention: It is recommended to avoid modifying the Index variable of the for loop within its body.
If the for loop is stopped, e.g., by a stop statement or by pressing the Stop button, and if the PC is placed
manually by the user, the for loop is continued at the current iteration as long as the PC remains within the for
body or is set to the endfor statement. If the PC is set on the for statement (or before it) and executed again,
the loop is reinitialized and restarts at the beginning.


Parameters
. Start (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer / real
Start value of the loop variable.
Default: 1
. End (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer / real
End value of the loop variable.
Default: 5
. Step (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer / real
Increment value of the loop variable.
Default: 1
. Index (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer / real
Loop variable.
Example

read_image (Image, 'fabrik')


threshold (Image, Region, 128, 255)
connection (Region, ConnectedRegions)
select_shape (ConnectedRegions, SelectedRegions, 'area', 'and', 150, 99999)
area_center (SelectedRegions, Area, Row, Column)
dev_close_window ()
dev_open_window (0, 0, 512, 512, 'black', WindowHandle)
dev_display (Image)
dev_display (SelectedRegions)
dev_set_color ('white')
for Index := 0 to |Area| - 1 by 1
set_tposition (WindowHandle, Row[Index], Column[Index])
write_string (WindowHandle, 'Area=' + Area[Index])
endfor

Result
If the values of the specified parameters are correct, for (as an operator) returns 2 (H_MSG_TRUE). Otherwise,
an exception is raised and an error code is returned.
Alternatives
while, until
See also
repeat, break, continue, endfor
Module
Foundation

global ( : : Declaration : )

Declare a global variable.


The global statement can be used to declare a global variable. By declaring a variable as global the variable
becomes visible to all other procedures that also declare the same variable explicitly as global.
If a variable is not explicitly declared as global inside a procedure, the variable is local within that procedure even
if there is a global variable with the same name.
The parameter Declaration contains the variable declaration that consists of the optional keyword ’def’, the
type ’object’ or ’tuple’, the optional keyword ’vector’ (followed by the desired dimension in round brackets), and
the variable name.
Setting the type to ’object’ declares an iconic variable; setting it to ’tuple’ declares a control variable.
The keyword ’def’ makes it possible to mark one declaration explicitly as the place where the variable is defined. In most
cases this will not be necessary because in HDevelop the variable instance is created as soon as it is declared
somewhere. However, if several procedures are exported to a programming language and if the procedures are


not exported into one output file that contains all procedures together but into separate output files it will become
necessary to mark one of the global variable declarations as the place where the variable is defined. A set of
procedure export files that are linked to one library or application must contain exactly one definition of each
global variable in order to avoid both undefined symbols and multiple definitions.
In the program listing, global variable declarations are displayed and must be entered without parentheses in order
to emphasize that the line is a declaration and not an executable operator. The syntax is as follows:

global [def] {object|tuple} [vector(<Dimension>)] <Variable Name>
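
A minimal sketch of such a declaration and its use (the variable name is hypothetical):

* declare and define a global control variable, e.g., in the main procedure
global def tuple ImageCounter
ImageCounter := 0

Any other procedure that also contains the declaration global tuple ImageCounter sees the same variable and can, for example, increment it with ImageCounter := ImageCounter + 1.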

Parameters
. Declaration (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Global variable declaration: optional keyword ’def’, type, and variable name
Suggested values: Declaration ∈ {’object’, ’tuple’, ’def object’, ’def tuple’, ’object vector(1)’, ’tuple
vector(1)’, ’def object vector(1)’, ’def tuple vector(1)’}
Result
global is never executed.
Module
Foundation

if ( : : Condition : )

Conditional statement.
if is a conditional statement that starts an if block. The Condition parameter must evaluate to a Boolean or
integer expression.
If Condition evaluates to ’true’ (not 0), the following block body up to the next corresponding block state-
ment elseif, else, or endif is executed. Reaching the end of the block the execution continues after the
corresponding endif statement.
If Condition evaluates to ’false’ (0), the execution is continued at the next corresponding block statement
elseif, else, or endif.
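
A minimal sketch of an if block (the variables are hypothetical and assumed to be initialized beforehand):

count_obj (Regions, Number)
if (Number == 0)
    dev_display (Image)
elseif (Number < 10)
    dev_display (Regions)
else
    select_obj (Regions, FirstRegion, 1)
    dev_display (FirstRegion)
endif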
Parameters
. Condition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Condition for the if statement.
Default: 1
Result
If the condition is correct if (as operator) returns 2 (H_MSG_TRUE). Otherwise, an exception is raised and an
error code returned.
Alternatives
elseif, else
See also
for, while, until
Module
Foundation

import ( : : ProcedureSource : )

Import one or more external procedures.


The import statement can be used to import additional external procedures from within an HDevelop program.
The imported procedures become available only for the procedure that contains the import statement, but not for
other procedures.
import statements may occur in any line of a procedure. The imported procedures become available only below
the import statement and may be overruled by later import statements.


proc()
* unresolved procedure call

import ./the_one_dir
proc()
* resolves to ./the_one_dir/proc.hdvp

import ./the_other_dir
proc()
* resolves to ./the_other_dir/proc.hdvp

The parameter ProcedureSource points to the source of the external procedures. It can either be the path of
a directory that contains the procedures and/or the procedure libraries to be used or directly the file name of a
procedure library. In both cases, the path may either be absolute or relative. In the latter case, HDevelop interprets
the path as being relative to the file location of the procedure that contains the import statement. Thus, the
location of this procedure can be included with ’.’. The path has to be in quotes if it contains one or more spaces,
otherwise the program line will become invalid.
In contrast to system, user-defined, and session directories, HDevelop looks for external procedures only in the
directory specified by an import statement, but not recursively in its subdirectories.
Note that an import statement is never executed; ProcedureSource therefore has to be evaluated
already at the procedure’s loading time. Hence, ProcedureSource has to be a constant expression, and, in
particular, it is not possible to pass a string variable to ProcedureSource.
However, ProcedureSource may also contain environment variables, which HDevelop resolves accordingly.
Environment variables, regardless of the platform actually used, must always be denoted in Windows syntax, i.e.,
%VARIABLE%.
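For example (a sketch; the environment variable and directory name are hypothetical):

import %MY_PROC_DIR%/common_procedures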
import tests neither whether the path ProcedureSource exists nor whether it points to a procedure library
or a directory that contains procedures at all. Therefore, import statements with nonexistent or meaningless paths
nonetheless remain valid program lines.
Import paths are listed separately in HDevelop’s procedure settings. Of course, these paths can’t be modified or
deactivated from within the procedure settings. Furthermore, procedures that are available only via an import
statement are marked with a special icon.
In the program listing, import statements are displayed and must be entered without parentheses in order to
emphasize that the line is a declaration and not an executable operator.
Parameters
. ProcedureSource (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
File location of the external procedures to be loaded: either a directory or a procedure library
Result
import is never executed.
Module
Foundation

insert ( : : Input, Value, Index : Result )

Assignment of a value to a tuple element.


insert is obsolete and is only provided for reasons of backward compatibility. The modifying version of
the insert operator was replaced by the operator assign_at. This operator uses the same notation in
the full text editor, so that it is used automatically. The non-modifying version of the insert operator is
replaced by the new operator tuple_replace.
insert assigns a single value to a specific element of a tuple.
In the full text editor an insert operation is simply entered with the help of the assignment operator sign ’:=’ and
the index access operator sign ’[ ]’ for the result variable, e.g.:

Areas[Radius-1] := Area


If the operator window is used for entering the insert operator, insert must be entered into the operator combo
box as the operator name. This opens the parameter area, where the parameter Value represents the expression
that has to be evaluated to one value and assigned to the element at position Index within the tuple Input. The
parameter Result gets the name of the variable where the result has to be stored.
If the input tuple that is passed via the parameter Input and the output tuple that is passed in Result are
identical (and only in that case), the insert operator is listed and can be written in the full text editor in the
above assignment notation. In this case, the input tuple is modified and the correct operator notation for above
assignment would be:

insert (Areas, Area, Radius-1, Areas)

If the Input tuple and the Result tuple differ, the input tuple will not be modified. In this case, within the
program listing only the operator notation can be used:

insert (Areas, Area, Radius-1, Result)

This is the same as:

Result := Areas
Result[Radius-1] := Area

Please note that the operator insert will not increase the tuple if the tuple already stores a value at the passed
index. Instead, the element at the position Index will be replaced. Hence, for the Value parameter exactly
one single value (or an expression that evaluates to one single value) must be passed.
If the passed Index parameter is beyond the current tuple size, the tuple will be increased to the required size.
The tuple elements that were inserted between the hitherto last element and the new element are undefined.
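
Both cases can be illustrated with a small sketch (the values are placeholders):

Areas := [10,20,30]
* index 1 lies within the tuple: the element is replaced
insert (Areas, 99, 1, Result1)
* Result1 is [10,99,30]
* index 5 lies beyond the tuple: the tuple is enlarged to six elements,
* the elements at the indices 3 and 4 are undefined
insert (Areas, 99, 5, Result2)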
Parameters
. Input (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer / string
Tuple, where the new value has to be inserted.
Default: []
. Value (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real / integer / string
Value that has to be inserted.
Default: 1
Value range: 0 ≤ Value ≤ 1000000
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Index position for new value.
Default: 0
Suggested values: Index ∈ {0, 1, 2, 3, 4, 5, 6}
Minimum increment: 1
. Result (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer / string
Result tuple with inserted values.
Result
If the expression is correct insert returns 2 (H_MSG_TRUE). Otherwise, an exception is raised and an error
code returned.
Alternatives
assign
Module
Foundation

par_join ( : : ThreadID : )

Wait for subthreads that were started with the par_start qualifier.
The par_join operator is used to wait in the calling procedure for all procedures or operators that have been
started in separate subthreads by adding the par_start qualifier to the according program line. The subthreads
to wait for are identified by their thread ids that are passed to the parameter ThreadID.


Attention: par_start is not an operator but a qualifier that is added at the beginning of the program line that has to
be executed in parallel to the calling procedure. The syntax is par_start <ThreadID> : followed by the
actual procedure or operator call.
Parameters
. ThreadID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . thread_id(-array) ; integer
Ids of all subthreads to wait for.
Example

* start two procedures in separate sub threads


par_start <ThreadID1> : producer_proc()
par_start <ThreadID2> : consumer_proc()
* wait until both procedures have finished
par_join ([ThreadID1, ThreadID2])

Result
If the values of the specified parameters are correct, par_join returns 2 (H_MSG_TRUE). Otherwise, an excep-
tion is raised and an error code returned.
Module
Foundation

repeat ( : : : )

Start statement of a repeat..until loop.


repeat is the first statement of a repeat..until loop.
Result
repeat always returns 2 (H_MSG_TRUE).
Alternatives
for, while
See also
until, break, continue
Module
Foundation

return ( : : : )

Terminate procedure call.


return terminates the current procedure call and returns to the calling procedure. Program execution is continued
at the next active program line after the procedure call in the calling procedure. If the current procedure is the main
procedure, program execution is finished and the program counter jumps to the end of the program. Note that
every procedure except the main procedure has to contain at least one reachable return operator line in order to
be able to return from a call to the procedure.
Result
return always returns 2 (H_MSG_TRUE).
Module
Foundation

stop ( : : : )

Stop program execution.


The stop operator stops the continuous program execution of the HDevelop program. If this happens, the PC
remains on the stop statement (instead of being placed at the next executable program line) to show the reason
for the program interruption directly even if numerous comments or other non-executable program lines follow.
The operator is equivalent to the Stop action (F9) in the menu bar. Unless parallel execution is used (via the
par_start qualifier), the program can easily be continued with the Run action (F5). See also “Parallel Execu-
tion” in the HDevelop User’s Guide.
It is possible to redefine the behavior by setting a time parameter in the preferences dialog. In this case, the
execution will not stop but continue after waiting for the specified period of time. Within this period of time, the
program can be interrupted with F9 or continued with one of the run commands. This is marked by an icon in the
first column of the program window.
Attention
This operator is not supported for code export.
Trying to continue a program that uses parallel execution after calling stop may cause non-deterministic thread
behavior or errors.
Example

read_image (Image, 'fabrik')


regiongrowing (Image, Regions, 3, 3, 6, 100)
count_obj (Regions, Number)
dev_update_window ('off')
for i := 1 to Number by 1
select_obj (Regions, RegionSelected, i)
dev_clear_window ()
dev_display (RegionSelected)
stop ()
endfor

Result
If the program stops at a stop statement, the return state of the previous operator is kept. If the program is
continued with the stop operator, stop always returns 2 (H_MSG_TRUE).
See also
exit
Module
Foundation

switch ( : : ControlExpression : )

Starts a multiway branch block.


switch starts a block that allows to control the program flow via a multiway branch. The parameter
ControlExpression must result in an integer value. This value determines to what case label the execution
jumps. Every case statement includes one integer constant. If the integer constant of a case statement is equal
to the calculated value of the parameter ControlExpression, the program execution continues there. In addi-
tion, an optional default statement can be defined as the last jump label within a switch block. The program
execution jumps to this default label, if no case constant matches the calculated ControlExpression
value.
As in the programming languages C, C++, and C#, the case statement is a jump label and—in contrast to
elseif—not the begin of an enclosed block that is automatically left at the next case or default statement.
In order to leave the switch block after the execution of the code lines of a case branch, as in C or C++ a
break statement must be inserted at the end of the case branch. break statements can be used anywhere
within a switch block. This causes the program execution to continue after the closing endswitch statement.
Without a break statement at the end of a branch the program execution “falls through” to the statements of the
following case or default branch.
If the same statements have to be executed in different cases, i.e., for multiple control values, several case state-
ments with different constant expressions can be listed one below the other.


Parameters
. ControlExpression (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Integer expression that determines at which case label the program execution is continued.
Example

TestStr := ''
for Index := 1 to 8 by 1
TestStr := TestStr + '<'
switch (Index)
case 1:
TestStr := TestStr + '1'
break
case 2:
TestStr := TestStr + '2'
* intentionally fall through to 3
case 3:
TestStr := TestStr + '3'
* intentionally fall through to 4
case 4:
TestStr := TestStr + '4'
break
case 5:
case 6:
* common case branch for 5 and 6
TestStr := TestStr + '56'
break
case 7:
* continue for loop
TestStr := TestStr + '7'
continue
default:
TestStr := TestStr + 'd'
break
endswitch
TestStr := TestStr + '>'
endfor

Result
If the condition is correct, switch (as an operator) returns 2 (H_MSG_TRUE). Otherwise, an exception is raised
and an error code is returned.
Alternatives
if, elseif, else
See also
case, default, endswitch, if
Module
Foundation

throw ( : : Exception : )

Throws a user-defined exception or rethrows a caught exception.


With the help of the operators try, catch, endtry, and throw it is possible to implement a dynamic exception
handling in HDevelop, which is comparable to the exception handling in C++ and C#. The basic concepts of the
exception handling in HDevelop are also described at the operators try, and dev_set_check as well as in the
“HDevelop User’s Guide”.


The operator throw provides an opportunity to throw an exception from an arbitrary place in the program. This
exception can be caught by the catch operator of a surrounding try-catch block. By this means the developer
is able to define his own specific error or exception states, for which the normal program execution is aborted in
order to continue with a specific cross-procedure exception handling, e.g., for freeing resources or restarting from
a defined state.
In such a user-defined exception a nearly arbitrary tuple can be thrown as the Exception parameter; the only
restriction is that the first element of the tuple should be set to a user-defined error code ≥ 30000. If different user-defined exception
states are possible, they can be distinguished using different error codes (≥ 30000) in the first element or by using
additional elements.
In addition, with the help of the operator throw it is possible to rethrow an exception that was caught with the
operator catch. This may be sensible, for instance, if within an inner try-catch-endtry block (e.g., within
an external procedure) only specific exceptions can be handled in an adequate way and all other exceptions must
be passed to the caller, where they can be caught and handled by an outer try-catch-endtry block.
For rethrowing a caught exception, it is possible to pass the Exception tuple that was caught by the catch
operator directly to the Exception parameter of the throw operator. Furthermore, it is possible to append
arbitrary (but no iconic) user data to the Exception tuple, that can be accessed after catching the exception as
’user_data’ with the operator dev_get_exception_data:

try
...
catch(Exception)
...
UserData := ...
throw([Exception, UserData])
endtry
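
A purely user-defined exception might, for example, be thrown as follows (a sketch; the error code and the variables are hypothetical):

if (NumDefects > MaxDefects)
    throw ([30001, 'too many defects detected', NumDefects])
endif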

Attention
The export of the operators try, catch, endtry, and throw is not supported for the language C, but only for
the languages C++, C# and VisualBasic/.NET. Only the latter support throwing exceptions across procedures.
Parameters
. Exception (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . exception-array ; integer / string
Tuple returning the exception data or user defined error codes.
Result
If the values of the specified parameters are correct, throw (as operator) returns 2 (H_MSG_TRUE). Otherwise,
an exception is raised and an error code returned.
See also
try, catch, endtry, dev_get_exception_data, dev_set_check
Module
Foundation

try ( : : : )

Starts a program block where exceptions are detected and caught.


With the help of the operators try, catch, endtry, and throw it is possible to implement a dynamic exception
handling in HDevelop, which is comparable to the exception handling in C++ and C#. By the operators try,
catch, and endtry two code blocks are formed: the first one (try .. catch) contains the watched program
lines that perform the normal program logic. The second block (catch .. endtry) contains the code that is
executed if an exception occurs.
The operator try enables the exception handling for the following program lines, i.e., the following code block
up to the corresponding catch operator is watched for exceptions. If during the execution of the subsequent
program lines an error or another exceptional state occurs, or if an exception is thrown explicitly by the operator
throw, the try block is left immediately (or—depending on a user preference—after displaying an error message
box) and the program execution continues at the corresponding catch operator. If the exception is thrown within


a procedure that was called from the try block (directly or via other procedure calls), the procedure call and
all intermediate procedure calls that are on the call stack above the try block are immediately aborted (or, if
applicable, also after displaying an error message box).
Whether an error message box is displayed before the exception is thrown or not, is controlled by the HDe-
velop preference ’Suppress error message dialogs within try-catch blocks’ that can be
reached via Edit->Preferences->General Options->Experienced User. This message box also
offers the opportunity to stop the program execution before the exception is thrown in order to edit the possibly
erroneous operator call.
The program block that is watched for exceptions ends with the corresponding catch operator. If within the
watched try block no exception occurred, the following catch block is ignored and the program execution
continues after the corresponding endtry operator.
try-catch-endtry blocks can be nested arbitrarily into each other, within a procedure or over different proce-
dure calls, as long as any inner try-catch-endtry block lies completely either within an outer try-catch or
a catch-endtry block. If an exception is thrown within an inner try-catch block, the exception handling is
caught in the corresponding catch-endtry block. Hence, the exception is not visible for the outer try-catch
blocks unless the exception is rethrown explicitly by calling a throw operator from the catch block.
If within a HALCON operator an error occurs, an exception tuple is created and passed to the catch operator
that is responsible for catching the exception. The tuple collects information about the error such as the error code
and the error text. After catching an exception, this information can be accessed with the help of the operator
dev_get_exception_data. For more information about the passed exception data, how to access them, and
considerations about the code export, see the description of that operator. The reference of the operator throw
describes how to throw user-defined exception tuples.
HDevelop offers the opportunity to disable the handling of HALCON errors. This can be achieved by calling
the operator dev_set_check(’~give_error’) or by unchecking the check box Give Error on the dialog
Edit->Preferences->Runtime Settings. If the error handling is switched off, in case of an HALCON
error no exception is thrown but the program execution is continued as normal at the next operator. In contrast
to that, the operator throw will always throw an exception independently of the ’give_error’ setting. The same
applies if an error occurred during the evaluation of a parameter expression.
Attention
The export of the operators try, catch, endtry, and throw is not supported for the language C, but only for
the languages C++, C# and VisualBasic/.NET. Only the latter support throwing exceptions across procedures.
Example

try
read_image (Image, 'may_be_not_available')
catch (Exception)
if (Exception[0] == 5200)
dev_get_exception_data (Exception, 'error_message', ErrMsg)
set_tposition (3600, 24, 12)
write_string (3600, ErrMsg)
return ()
else
* rethrow the exception
throw ([Exception,'unknown exception in myproc'])
endif
endtry

Result
try always returns 2 (H_MSG_TRUE).
Alternatives
dev_set_check
See also
catch, endtry, throw, dev_get_exception_data, dev_set_check
Module
Foundation


until ( : : Condition : )

Continue to execute the body as long as the condition is not true.


until ends a repeat..until loop. The repeat..until loop is executed as long as the Condition param-
eter evaluates to ’false’ (0). The body of the loop is executed at least once, because the condition will be checked
at the end of the body.
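
A minimal sketch of a repeat..until loop (the condition is a placeholder):

Count := 0
repeat
    Count := Count + 1
until (Count >= 5)
* the body has been executed five times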
Parameters
. Condition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Condition for loop.
Result
If the values of the specified parameters are correct, until (as operator) returns 2 (H_MSG_TRUE). Otherwise,
an exception is raised and an error code returned.
Alternatives
for, while
See also
repeat, if, elseif, else, break, continue
Module
Foundation

while ( : : Condition : )

Starts a loop block that is executed as long as the condition is true.


while executes the loop body up to the corresponding endwhile statement as long as the Condition param-
eter evaluates to ’true’ (or a number not equal 0).
If the condition evaluates to ’false’ (0) the program is continued after the corresponding endwhile statement.
Parameters
. Condition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Condition for loop.
Example

dev_update_window ('off')
dev_close_window ()
dev_open_window (0, 0, 512, 512, 'black', WindowID)
read_image (Image, 'particle')
dev_display (Image)
stop ()
threshold (Image, Large, 110, 255)
dilation_circle (Large, LargeDilation, 7.5)
dev_display (Image)
dev_set_draw ('margin')
dev_set_line_width (3)
dev_set_color ('green')
dev_display (LargeDilation)
dev_set_draw ('fill')
stop ()
complement (LargeDilation, NotLarge)
reduce_domain (Image, NotLarge, ParticlesRed)
mean_image (ParticlesRed, Mean, 31, 31)
dyn_threshold (ParticlesRed, Mean, SmallRaw, 3, 'light')
opening_circle (SmallRaw, Small, 2.5)
connection (Small, SmallConnection)
dev_display (Image)


dev_set_colored (12)
dev_display (SmallConnection)
stop ()
dev_set_color ('green')
dev_display (Image)
dev_display (SmallConnection)
Button := 1
while (Button == 1)
dev_set_color ('green')
get_mbutton (WindowID, Row, Column, Button)
dev_display (Image)
dev_display (SmallConnection)
dev_set_color ('red')
select_region_point (SmallConnection, SmallSingle, Row, Column)
dev_display (SmallSingle)
NumSingle := |SmallSingle|
if (NumSingle == 1)
intensity (SmallSingle, Image, MeanGray, DeviationGray)
area_center (SmallSingle, Area, Row, Column)
dev_set_color ('yellow')
set_tposition (WindowID, Row, Column)
write_string (WindowID, 'Area='+Area+', Int='+MeanGray)
endif
endwhile
dev_set_line_width (1)
dev_update_window ('on')

Result
If the values of the specified parameters are correct, while (as operator) returns 2 (H_MSG_TRUE). Otherwise,
an exception is raised and an error code returned.
Alternatives
for, until
See also
repeat, break, continue, if, elseif, else
Module
Foundation



Chapter 9

Deep Learning

Introduction
The term deep learning (DL) refers to a family of machine learning methods. In HALCON, the following methods
are implemented:

3D Gripping Point Detection: Detect gripping points on objects in a 3D scene. For further information please
see the chapter 3D Matching / 3D Gripping Point Detection.

A possible example for a 3D Gripping Point Detection application: A 3D scene (e.g., an RGB image and
XYZ-images) is analyzed and possible gripping points are suggested.

Anomaly Detection and Global Context Anomaly Detection: Assign to each pixel the likelihood that it shows
an unknown feature. For further information please see the chapter Deep Learning / Anomaly Detection and
Global Context Anomaly Detection.

Top: A possible example for anomaly detection: A score is assigned to every pixel of the input image,
indicating how likely it shows an unknown feature, i.e., an anomaly. Bottom: A possible example for
Global Context Anomaly Detection: A score is assigned to every pixel of the input image, indicating how
likely it shows a structural or logical anomaly.

Classification: Classify an image into one class out of a given set of classes. For further information please see
the chapter Deep Learning / Classification.


A possible example for classification: The image gets assigned to a class.

A possible example for Out-of-Distribution Detection for classification: The image is assigned to a class
and identified as Out-of-Distribution if applicable.

Deep 3D Matching: Detect objects in a scene and compute their 3D pose. For further information please see the
chapter 3D Matching / Deep 3D Matching.

A possible example for a Deep 3D Matching application: Images from different angles are used to detect an
object. As a result the 3D pose of the object is computed.

Deep Counting: Detect and count objects in images. For further information please see the chapter Matching /
Deep Counting.


A possible example for a Deep Counting application: Objects in an image are counted and the object
quantity is returned.

Deep OCR: Detect and recognize words (not just characters) in an image. For further information please see the
chapter OCR / Deep OCR.


A possible example for deep-learning-based optical character recognition: Words in an image are detected
and recognized.

Multi-label Classification: An image is assigned all contained classes from a given set of classes. For further
information please see the chapter Deep Learning / Multi-Label Classification.



A possible example for multi-label classification: All contained classes are assigned to the image.

Object Detection and Instance Segmentation: Detect objects of the given classes and localize them within the
image. Instance segmentation is a special case of object detection, where the model also predicts distin-
guished object instances and additionally assigns for the found instances their region within the image. For
further information please see the chapter Deep Learning / Object Detection and Instance Segmentation.


Top: A possible example for object detection: Within the input image three instances are found and
assigned to a class.
Bottom: A possible example for instance segmentation: Every instance gets its individual region marked.

Semantic Segmentation and Edge Extraction: Assign a class to each pixel of an image, but different instances
of a class are not distinguished. Edge extraction is a special case of semantic segmentation, where every pixel of the input
image is assigned to one of the two classes ’edge’ and ’background’. For further information please see the
chapter Deep Learning / Semantic Segmentation and Edge Extraction.


Top: A possible example for semantic segmentation: Every pixel of the input image is assigned to a class.
Bottom: A possible example for edge extraction: Pixels belonging to specific edges are assigned to the
class ’edge’.

All of the deep learning methods listed above use a network for the assignment task. In HALCON they are
implemented within the general DL model, see Deep Learning / Model. The model is trained by only considering
the input and output, which is also called end-to-end learning. Basically, using images and the information about what is
visible in them, the training algorithm adjusts the model so that it learns to distinguish the different classes and,
where applicable, also how to find the corresponding objects. For you, this has the nice outcome that no manual feature
specification is needed. Instead, you have to select and collect appropriate data.
System Requirements and License Information
For deep learning, additional prerequisites apply. Please see the requirements listed in the HALCON
“Installation Guide”, paragraph “Requirements for Deep Learning and Deep-Learning-Based Methods”.


Note that the required module license depends on the model type used in your application. For a detailed descrip-
tion please refer to the “Installation Guide”, paragraph “Dynamic Modules for Deep-Learning-Based
Applications”.
General Workflow
As the DL methods mentioned above differ in what they do and how they need the data, you need to know which
method is most appropriate for your specific task. Once this is clear, you need to collect a suitable amount of data,
meaning images and the information needed by the method. After that, there is a common general workflow for
all these DL methods:

Prepare the Network and the Data The network needs to be prepared for your task and your data adapted to the
specific network.

• Get a network: Read in a pretrained network or create one.


• The network needs to know which problem it shall solve, i.e., which classes are to be distinguished
and what such samples look like. This is represented by your dataset, i.e., your images with the
corresponding ground truth information.
• The network will impose several requirements on the images (as e.g., the image dimension, gray value
range, ... ). Therefore the images have to be preprocessed so that the network can process them.
• We recommend to split the dataset into three distinct datasets which are used for training, validation,
and testing.

Train the Network and Evaluate the Training Progress Once your network is set up and your data prepared it
is time to train the network for your specific task.

• Set the hyperparameters appropriate to your task and system.


• Optionally specify your data augmentation.
• Start the training and evaluate your network.

Apply and Evaluate the Final Network Your network is trained for your task and ready to be applied. But before
deploying it in the real world you should evaluate how well the network performs on basis of your test
dataset.
Inference Phase When your network is trained and you are satisfied with its performance, you can use it for
inference on new images. Thereby the images need to be preprocessed according to the requirements of the
network (thus, in the same way as for training).
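
The following sketch outlines these steps for a classification model. It is a strongly simplified illustration, not a complete program: the pretrained file name is one of the classifiers delivered with HALCON, the parameter values are placeholders, and the dataset handling with the procedures preprocess_dl_samples and split_dl_dataset is only indicated by comments.

* read a pretrained network
read_dl_model ('pretrained_dl_classifier_compact.hdl', DLModelHandle)
* adapt the model to the task and set the hyperparameters
set_dl_model_param (DLModelHandle, 'class_names', ['apple','peach','pear'])
set_dl_model_param (DLModelHandle, 'batch_size', 32)
set_dl_model_param (DLModelHandle, 'learning_rate', 0.001)
* ... read, preprocess (preprocess_dl_samples), and split (split_dl_dataset)
* the dataset ...
* training is typically done batch-wise, e.g., in a loop over all batches and epochs
train_dl_model_batch (DLModelHandle, DLSampleBatch, TotalLoss)
* inference on new samples that were preprocessed in the same way
apply_dl_model (DLModelHandle, DLSampleInference, [], DLResult)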

Data
The term ’data’ is used in the context of deep learning for the images and the information about what they show. This
latter information has to be provided in a way the network can understand. Not surprisingly, the different DL
methods have their own requirements concerning what information has to be provided and how. Please see the
corresponding chapters for the specific requirements.
The network further poses requirements on the images regarding the image dimensions, the gray value range, and
the type. The specific values depend on the network itself and can be queried with get_dl_model_param.
Additionally, depending on the method there are also requirements regarding the information as e.g., the bounding
boxes. To fulfill all these requirements, the data may have to be preprocessed, which can be done most conveniently
with the corresponding procedure preprocess_dl_samples.
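
For example, assuming a model handle DLModelHandle obtained with read_dl_model, the image requirements could be queried as follows (a sketch):

get_dl_model_param (DLModelHandle, 'image_width', ImageWidth)
get_dl_model_param (DLModelHandle, 'image_height', ImageHeight)
get_dl_model_param (DLModelHandle, 'image_num_channels', NumChannels)
get_dl_model_param (DLModelHandle, 'image_range_min', RangeMin)
get_dl_model_param (DLModelHandle, 'image_range_max', RangeMax)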
When you train your network, the network gets adapted to its task. But at one point you will want to evaluate what
the network learned and at an even later point you will want to test the network. Therefore the dataset will be split
into three subsets, which should be independent and identically distributed. In simple words, the subsets should
not be connected to each other in any way, and each set should contain the same distribution of images for every class.
This splitting is conveniently done by the procedure split_dl_dataset. By far the largest subset will be
used for the retraining. We refer to this dataset as the training dataset. At a certain point the performance of the
network is evaluated to check whether it is beneficial to continue the network optimization. For this validation the
second set of data is used, the validation dataset. Even if the validation dataset is disjoint from the first one, it has
an influence on the network optimization. Therefore to test the possible predictions when the model is deployed in


the real world, the third dataset is used, the test dataset. For a representative network validation or evaluation, the
validation and test dataset should have statistically relevant data, which gives a lower bound on the amount of data
needed.
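
The split described above could, for example, be done as follows (a sketch; DLDataset denotes the dataset dictionary and the percentages are placeholders):

* 70% training set, 15% validation set, the remaining 15% form the test set
split_dl_dataset (DLDataset, 70, 15, [])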
Note also that for training the network, you should use representative images, i.e., images like the ones you want
to process later and not only ’perfect’ images, as otherwise the network may have difficulties with non-’perfect’
images.
The Network and the Training Process
In the context of deep learning, the assignments are performed by sending the input image through a network. The
output of the total network consists of a number of predictions. Such predictions are e.g., for a classification task
the confidence for each class, expressing how likely the image shows an instance of this class.
The specific network will vary, especially from one method to another. Some methods like e.g., object detection,
use a subnetwork to generate feature maps (see the explanations given below and in Deep Learning / Object
Detection and Instance Segmentation). Here, we will explain a basic Convolutional Neural Network (CNN). Such
a network consists of a certain number of layers or filters, which are arranged and connected in a specific way.
In general, any layer is a building block performing specific tasks. It can be seen as a container, which receives
input, transforms it according to a function, and returns the output to the next layer. Thereby different functions
are possible for different types of layers. Several possible examples are given in the “Solution Guide on
Classification”. Many layers or filters have weights, parameters which are also called filter weights. These
are the parameters modified during the training of a network. The output of most layers are feature maps. Thereby
the number of feature maps (the depth of the layer output) and their size (width and height) depends on the specific
layer.


Schema of an extract of a possible classification network. Below we show feature maps corresponding to the
layers, zoomed to a uniform size.

To train a network for a specific task, a loss function is added. There are different loss functions depending on
the task, but they all work according to the following principle. A loss function compares the prediction from
the network with the given information, what it should find in the image (and, if applicable, also where), and
penalizes deviations. Now the filter weights are updated in such a way that the loss function is minimized. Thus,
training the network for the specific task, one strives to minimize the loss (an error function) of the network, in the
hope that doing so will also improve the performance measure. In practice, this optimization is done by calculating
the gradient and updating the parameters of the different layers (filter weights) accordingly. This is repeated by
iterating multiple times over the training data.
There are additional parameters that influence the training, but which are not directly learned during the regular
training. These parameters have values set before starting the training. We refer to this last type of parameters as
hyperparameters in order to distinguish them from the network parameters that are optimized during training. See
the section “Setting the Training Parameters: The Hyperparameters”.
To train all filter weights from scratch, a lot of resources are needed. Therefore one can take advantage of the
following observation. The first layers detect low-level features like edges and curves. The feature maps of the
following layers are smaller, but they represent more complex features. For a large network, the low-level features
are general enough so the weights of the corresponding layers will not change much among different tasks. This
leads to a technique called transfer learning: One takes an already trained network and retrains it for a specific task,


benefiting from already quite suitable filter weights for the lower layers. As a result, considerably less resources
are needed. While in general the network should be more reliable when trained on a larger dataset, the amount of
data needed for retraining also depends on the complexity of the task. A basic schema for the workflow of transfer
learning is shown with the aid of classification in the figure below.



Basic schema of transfer learning with the aid of classification. (1) A pretrained network is read. (2) Training phase,
the network gets trained with the training data. (3) The trained model with new capabilities. (4) Inference phase,
the trained network infers on new images.

Setting the Training Parameters: The Hyperparameters


The different DL methods are designed for different tasks and will vary in the way they are built up. They all
have in common that during the training of the model one faces a minimization problem. Training the network or
subnetwork, one strives to minimize an appropriate loss function, see the section “The Network and the Training
Process”. For doing so, there is a set of further parameters which is set before starting the training and not optimized
during the training. We refer to these parameters as hyperparameters. For a DL model, you can set a change
strategy, specifying when and how you want these hyperparameters changed during the training. In this section,
we explain the idea of the different hyperparameters. Note, that certain methods have additional hyperparameters,
you find more information in their respective chapter.
As already mentioned, the loss compares the predictions from the network with the given information about the
content of the image. The loss now penalizes deviations. Training the network means updating the filter weights
in such a way, that the loss has to penalize less, thus the loss result is optimized. To do so, a certain amount of data
is taken from the training dataset. For this subset the gradient of the loss is calculated and the network modified in
updating its filter weights accordingly. Now this is repeated with the next subset of data till the whole training data
is processed. These subsets of the training data are called batches and the size of these subsets, the ’batch_size’,
determines the number of samples taken into a batch and, as a consequence, processed together.
A full iteration over the entire training data is called an epoch. It is beneficial to iterate several times over the training
data. The number of iterations is defined by ’epochs’. Thus, ’epochs’ determines how many times the
algorithm loops over the training set.
Some models (e.g., anomaly detection) train utilizing the whole dataset at once. For other models, the dataset is
processed batch-wise and in order to do so, the SGD (stochastic gradient descent algorithm) or Adam (adaptive
moment estimation) can be used. This involves further parameters, which are explained in the following. After
every calculation of the loss gradient the filter weights are updated. For this update, there are two important
hyperparameters: The ’learning_rate’ λ, determining the weight of the gradient on the updated loss function
arguments (the filter weights), and the ’momentum’ µ within the interval [0, 1), specifying the influence of previous
updates. More information can be found in the documentation of train_dl_model_batch. In simple words,
when we update the loss function arguments, we still remember the step we took for the last update. Now, we
take a step in the direction of the gradient with a length depending on the learning rate; additionally we repeat the step


we did last time, but this time only µ times as long. A visualization is given in the figure below. A learning rate that is too large
might result in divergence of the algorithm; a very small learning rate will take unnecessarily many
steps. Therefore, it is customary to start with a larger learning rate and potentially reduce it during training. With
a momentum µ = 0, the momentum method has no influence, so only the gradient determines the update vector.


Sketch of the ’learning_rate’ and the ’momentum’ during an actualization step. The gradient step: the learning
rate λ times the gradient g (λg - dashed lines). The momentum step: the momentum µ times the previous update
vector v (µv - dotted lines). Together, they form the actual step: the update vector v (v - solid lines).
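Read in the notation of the caption, the sketch corresponds to the update vector v(k+1) = λ·g(k+1) + µ·v(k); whether this vector is added to or subtracted from the filter weights depends on the sign convention, see the documentation of train_dl_model_batch.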

To prevent the neural networks from overfitting (see the part “Risk of Underfitting and Overfitting” below), reg-
ularization can be used. With this technique an extra term is added to the loss function. One possible type of
regularization is weight decay, for details see the documentation of train_dl_model_batch. It works by
penalizing large weights, i.e., pushing the weights towards zero. Simply put, this regularization favors simpler
models that are less likely to fit to noise in the training data and generalize better. It can be set by the hyperpa-
rameter ’weight_prior’. Choosing its value is a trade-off between the model’s ability to generalize, overfitting, and
underfitting. If ’weight_prior’ is too small, the model might overfit; if it is too large, the model might lose its
ability to fit the data well because all weights are effectively zero.
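
In HDevelop, these hyperparameters are set on the model handle, for example (a sketch with placeholder values):

set_dl_model_param (DLModelHandle, 'batch_size', 16)
set_dl_model_param (DLModelHandle, 'learning_rate', 0.001)
set_dl_model_param (DLModelHandle, 'momentum', 0.9)
set_dl_model_param (DLModelHandle, 'weight_prior', 0.0005)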
With the training data and all the hyperparameters, there are many different aspects that can have an influence on
the outcome of such complex algorithms. To improve the performance of a network, generally the addition of
training data also helps. Please note that whether gathering more data is a good solution also depends on how
easily this can be done. Usually, a small additional fraction will not noticeably change the total performance.
Supervising the training
The different DL methods have different results. Accordingly they also use different measures to determine ’how
well’ a network performs. When training a network, there are behaviors and pitfalls applying to different models,
which are described here.

Validation During Training When it comes to the validation of the network performance, it is important to note
that this is not a pure optimization problem (see the parts “The Network and the Training Process” and
“Setting the Training Parameters” above).
In order to observe the training progress, it is usually helpful to visualize a validation measure, e.g., for
the training of a classification network, the error over the samples of a batch. As the samples differ, the
difficulty of the assignment task may differ. Thus it may be that the network performs better or worse for the
samples of a given batch than for the samples of another batch. So it is normal that the validation measure
is not changing smoothly over the iterations. But in total it should improve. Adjusting the hyperparameters
’learning_rate’ and ’momentum’ can help to improve the validation measure again. The following figures
show possible scenarios.


Sketch of a validation measure during training, here using the error from classification as an example. (1)
General tendencies for possible outcomes with different ’learning_rate’ values, dark blue: good learning rate,
gray: very high learning rate, light blue: high learning rate, orange: low learning rate. (2) Ideal case with a
learning rate policy to reduce the ’learning_rate’ value after a given number of iterations. In orange: training
error, dark blue: validation error. The arrow marks the iteration, at which the learning rate is decreased.

Risk of Underfitting and Overfitting Underfitting occurs if the model is not able to capture the complexity of
the task. It is directly reflected in the validation measure on the training set which stays high.
Overfitting happens when the network starts to ’memorize’ training data instead of learning how to gener-
alize. This is shown by a validation measure on the training set which stays good or even improves while
the validation measure on the validation set decreases. In such a case, regularization may help. See the
explanations of the hyperparameter ’weight_prior’ in the section “Setting the Training Parameters: The Hy-
perparameters”. Note that a similar phenomenon occurs when the model capacity is too high with respect to
the data.


Sketch of a possible overfitting scenario, visible on the generalization gap (indicated with the arrow). The
error from classification serves as an example for a validation measure.

Confusion Matrix A network infers for an instance a top prediction, the class for which the network deduces
the highest affinity. When we know its ground truth class, we can compare the two class affiliations: the
predicted one and the correct one. Thereby, what constitutes an instance differs between the different types of methods:
e.g., in classification the instances are images, whereas in semantic segmentation the instances are single pixels.
When more than two classes are distinguished, one can also reduce the comparison into binary problems.
This means, for a given class you just compare if it is the same class (positive) or any other class (negative).
For such binary classification problems the comparison is reduced to the following four possible entities
(whereof not all are applicable for every method):

• True positives (TP: predicted positive, labeled positive),


• true negatives (TN: predicted negative, labeled negative),
• false positives (FP: predicted positive, labeled negative),
• false negatives (FN: predicted negative, labeled positive).


A confusion matrix is a table with such comparisons. This table makes it easy to see how well the network
performs for each class. For every class it lists how many instances have been predicted into which class.
E.g., for a classifier distinguishing the three classes ’apple’, ’peach’, and ’orange’, the confusion matrix
shows how many images with ground truth class affiliation ’apple’ have been classified as ’apple’ and how
many have been classified as ’peach’ or ’orange’. Of course, this is listed for the other classes as well. This
example is shown in the figure below. In HALCON, we represent for each class the instances with this
ground truth label in a column and the instances predicted to belong to this class in a row.

An example of confusion matrices from classification. We see that 68 images of an ’apple’ have been
classified as such (TP), 60 images not showing an ’apple’ have been correctly classified as a ’peach’ (30) or
’pear’ (30) (TN), 0 images show a ’peach’ or a ’pear’ but have been classified as an ’apple’ (FP) and 24
images of an ’apple’ have wrongly been classified as ’peach’ (21) or ’pear’ (3) (FN). (1) A confusion matrix
for all three distinguished classes. It appears as if the network ’confuses’ apples and peaches more than all
other combinations. (2) The confusion matrix of the binary problem to better visualize the ’apple’ class.

Glossary
In the following, we describe the most important terms used in the context of deep learning:

Adam Adam (adaptive moment estimation) is a first-order gradient-based optimization algorithm for stochastic
objective functions, which computes individual adaptive learning rates. In the deep learning methods this
algorithm can be used to minimize the loss function.
anchor Anchors are fixed bounding boxes. They serve as reference boxes, with the aid of which the network
proposes bounding boxes for the objects to be localized.
annotation An annotation is the ground truth information, what a given instance in the data represents, in a way
recognizable for the network. This is e.g., the bounding box and the corresponding label for an instance in
object detection.
anomaly An anomaly means something deviating from the norm, something unknown.
backbone A backbone is a part of a pretrained classification network. Its task is to generate various feature maps,
which is why the classifying layer has been removed.
batch size - hyperparameter ’batch_size’ The dataset is divided into smaller subsets of data, which are called
batches. The batch size determines the number of images taken into a batch and thus processed simultane-
ously.
bounding box Bounding boxes are rectangular boxes used to define a part within an image and to specify the
localization of an object within an image.
class agnostic Class agnostic means without the knowledge of the different classes.
In HALCON, we use it for reduction of overlapping predicted bounding boxes. This means, for a class
agnostic bounding box suppression the suppression of overlapping instances is done ignoring the knowledge
of classes, thus strongly overlapping instances get suppressed independently of their class.
change strategy A change strategy denotes the strategy, when and how hyperparameters are changed during the
training of a DL model.


class Classes are discrete categories (e.g., ’apple’, ’peach’, ’pear’) that the network distinguishes. In HALCON,
the class of an instance is given by its appropriate annotation.
classifier In the context of deep learning we refer to the term classifier as follows. The classifier takes an image
as input and returns the inferred confidence values, expressing how likely the image belongs to every distin-
guished class. E.g., the three classes ’apple’, ’peach’, and ’pear’ are distinguished. Now we give an image
of an apple to the classifier. As a result, the confidences ’apple’: 0.92, ’peach’: 0.07, and ’pear’: 0.01 could
be returned.
COCO COCO is an abbreviation for "common objects in context", a large-scale object detection, segmentation,
and captioning dataset. There is a common file format for each of the different annotation types.

confidence Confidence is a number expressing the affinity of an instance to a class. In HALCON the confidence
is the probability, given in the range of [0,1]. Alternative name: score
confusion matrix A confusion matrix is a table which compares the classes predicted by the network (top-1) with
the ground truth class affiliations. It is often used to visualize the performance of the network on a validation
or test set.
Convolutional Neural Networks (CNNs) Convolutional Neural Networks are neural networks used in deep
learning, characterized by the presence of at least one convolutional layer in the network. They are par-
ticularly successful for image classification.
data We use the term data in the context of deep learning for instances to be recognized (e.g., images) and their
corresponding information concerning the predictable characteristics (e.g., the labels in case of classification).
data augmentation Data augmentation is the generation of altered copies of samples within a dataset. This is
done in order to augment the richness of the dataset, e.g., through flipping or rotating.
dataset: training, validation, and test set With dataset we refer to the complete set of data used for a training.
The dataset is split into three, if possible disjoint, subsets:

• The training set contains the data on which the algorithm optimizes the network directly.
• The validation set contains the data to evaluate the network performance during training.
• The test set is used to test possible inferences (predictions), thus to test the performance on data without
any influence on the network optimization.

deep learning The term "deep learning" was originally used to describe the training of neural networks with
multiple hidden layers. Today it is rather used as a generic term for several different concepts in machine
learning. In HALCON, we use the term deep learning for methods using a neural network with multiple
hidden layers.

epoch In the context of deep learning, an epoch is a single training iteration over the entire training data, i.e., over
all batches. Iterations over epochs should not be confused with the iterations over single batches (e.g., within
an epoch).
errors In the context of deep learning, we refer to error when the inferred class of an instance does not match the
real class (e.g., the ground truth label in case of classification). Within HALCON, we use the term error in
deep learning when we refer to the top-1 error.
feature map A feature map is the output of a given layer.
feature pyramid A feature pyramid is simply a group of feature maps, whereby every feature map originates from
another level, i.e., it is smaller than the feature maps of the preceding levels.

head Heads are subnetworks. For certain architectures they attach to selected pyramid levels. These subnetworks
process information from previous parts of the total network in order to generate spatially resolved output,
e.g., for the class predictions. From this they generate the output of the total network and thus constitute
the input of the losses.
hyperparameter Like every machine learning model, CNNs contain many formulas with many parameters. Dur-
ing training the model learns from the data in the sense of optimizing the parameters. However, such models
can have other, additional parameters, which are not directly learned during the regular training. These
parameters have values set before starting the training. We refer to this last type of parameters as hyperpa-
rameters in order to distinguish them from the network parameters that are optimized during training. Or
from another point of view, hyperparameters are solver-specific parameters.
Prominent examples are the initial learning rate or the batch size.

inference phase The inference phase is the stage when a trained network is applied to predict (infer) instances
(which can be the total input image or just a part of it) and, if applicable, their localization. Unlike during the
training phase, the network is not changed anymore in the inference phase.
in-distribution In-distribution refers to data that comes from the same underlying distribution as the data on which
a model was trained. When a model encounters in-distribution data during inference, the data is similar in
terms of its statistical properties, features, and patterns to what the model has seen before during training.
intersection over union The intersection over union (IoU) is a measure to quantify the overlap of two areas. We
can determine the parts common in both areas, the intersection, as well as the united areas, the union. The
IoU is the ratio between the two areas intersection and union.
The application of this concept may differ between the methods.

label Labels are arbitrary strings used to define the class of an image. In HALCON these labels are given by the
image name (possibly followed by a combination of underscore and digits) or by the directory name, e.g.,
’apple_01.png’, ’pear.png’, ’peach/01.png’.
layer and hidden layer A layer is a building block in a neural network, thus performing specific tasks (e.g., con-
volution, pooling, etc., for further details we refer to the “Solution Guide on Classification”).
It can be seen as a container, which receives weighted input, transforms it, and returns the output to the next
layer. Input and output layers are connected to the dataset, i.e., the images or the labels, respectively. All
layers in between are called hidden layers.
learning rate - hyperparameter ’learning_rate’ The learning rate is the weighting, with which the gradient is
considered when updating the arguments of the loss function. In simple words, when we want to optimize a
function, the gradient tells us the direction in which we shall optimize and the learning rate determines how
far along this direction we step.
Alternative names: λ, step size
level Within a feature pyramid network, the term level denotes the whole group of layers whose feature
maps have the same width and height. Thereby the input image represents level 0.
loss A loss function compares the prediction from the network with the given information, what it should find in
the image (and, if applicable, also where), and penalizes deviations. This loss function is the function we
optimize during the training process to adapt the network to a specific task.
Alternative names: objective function, cost function, utility function

momentum - hyperparameter ’momentum’ The momentum µ ∈ [0, 1) is used for the optimization of the loss
function arguments. When the loss function arguments are updated (after having calculated the gradient), a
fraction µ of the previous update vector (of the past iteration step) is added. This has the effect of damping
oscillations. We refer to the hyperparameter µ as momentum. When µ is set to 0, the momentum method has
no influence. In simple words, when we update the loss function arguments, we still remember the step we
did for the last update. Now we go a step in direction of the gradient with a length according to the learning
rate and additionally we repeat the step we did last time, but this time only µ times as long.
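As a hedged illustration (the notation is an assumption, not taken from this reference), this description corresponds to the common momentum update with learning rate λ and gradient ∇L of the loss with respect to its arguments w:

$$v_t = \mu\, v_{t-1} + \lambda\, \nabla L(w_{t-1}), \qquad w_t = w_{t-1} - v_t$$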
non-maximum suppression In object detection, non-maximum suppression is used to suppress overlapping pre-
dicted bounding boxes. When different instances overlap more than a given threshold value, only the one
with the highest confidence value is kept while the other instances, not having the maximum confidence
value, are suppressed.
Out-of-Distribution Out-of-Distribution refers to data that significantly differs from the data on which a model
was trained. When a model encounters out-of-distribution data during inference, the data’s statistical prop-
erties, features, or patterns are unfamiliar to the model, leading to potential challenges in making accurate
predictions.


overfitting Overfitting happens when the network starts to ’memorize’ training data instead of learning how to
find general rules for the classification. This becomes visible when the model continues to minimize error on
the training set but the error on the validation set increases. Since most neural networks have a huge amount
of weights, these networks are particularly prone to overfitting.
regularization - hyperparameter ’weight_prior’ Regularization is a technique to prevent neural networks from
overfitting by adding an extra term to the loss function. It works by penalizing large weights, i.e., pushing
the weights towards zero. Simply put, regularization favors simpler models that are less likely to fit to
noise in the training data and generalize better. In HALCON, regularization is controlled via the parameter
’weight_prior’.
Alternative names: regularization parameter, weight decay parameter, λ (note that in HALCON we use λ
for the learning rate and within formulas the symbol α for the regularization parameter).
retraining We define retraining as updating the weights of an already pretrained network, i.e., during retraining
the network learns the specific task.
Alternative names: fine-tuning.
solver The solver optimizes the network by updating the weights in a way to optimize (i.e., minimize) the loss.
stochastic gradient descent (SGD) SGD is an iterative optimization algorithm for differentiable functions. A key
feature of the SGD is to calculate the gradient only based on a single batch containing stochastically sampled
data and not all data. In the deep learning methods this algorithm can be used to calculate the gradient to
optimize (i.e., minimize) the loss function.
top-k error The classifier infers for a given image class confidences of how likely the image belongs to every
distinguished class. Thus, for an image we can sort the predicted classes according to the confidence value
the classifier assigned. The top-k error tells the ratio of predictions where the ground truth class is not
within the k predicted classes with highest probability. In the case of top-1 error, we check if the target label
matches the prediction with the highest probability. In the case of top-3 error, we check if the target label
matches one of the top 3 predictions (the 3 labels getting the highest probability for this image).
Alternative names: top-k score
transfer learning Transfer learning refers to the technique where a network is built upon the knowledge of an
already existing network. In concrete terms this means taking an already (pre)trained network with its
weights and adapting the output layer to the respective application to obtain your network. In HALCON, we also
see the following retraining step as a part of transfer learning.
underfitting Underfitting occurs when the model over-generalizes. In other words it is not able to describe the
complexity of the task. This is directly reflected in the error on the training set, which does not decrease
significantly.
weights In general weights are the free parameters of the network, which are altered during the training due to the
optimization of the loss. A layer with weights multiplies or adds them with its input values. In contrast to
hyperparameters, weights are optimized and thus changed during the training.

Further Information
Get an introduction to deep learning or learn about datasets for deep learning and many other topics in interactive
online courses at our MVTec Academy.

get_dl_device_param ( : : DLDeviceHandle, GenParamName : GenParamValue )

Return the parameters of a deep-learning-capable hardware device.


get_dl_device_param returns the parameter values GenParamValue of GenParamName for the
deep-learning-capable hardware device (hereafter referred to as device) DLDeviceHandle. See
query_available_dl_devices for details about deep-learning-capable hardware devices.
Supported values for GenParamName are:

’calibration_precisions’: Specifies the unit data types that can be used for a calibration of a deep learning model.
List of values: ’int8’.


’cast_precisions’: Specifies the unit data types that can be used for a cast of a deep learning model.
In contrast to a calibration, changing the data type by a cast does not require any images.
List of values: ’float32’, ’float16’.
’conversion_supported’: Returns ’true’ if unit data types for either a calibration or a cast of a deep learning model
are available. Returns ’false’ in any other case.
’id’: The ID of the device. Within each inference engine, the IDs of its supported devices are unique. The same
holds for devices supported through HALCON.
’inference_only’: Indicates if the device can only be used to infer deep learning models (’true’) or also supports
training or gradient-based operations (’false’).
’ai_accelerator_interface’: AI Accelerator Interface (AI²) on which this unit DLDeviceHandle is executed. In
case the device is directly supported by HALCON, the value ’none’ is returned.
List of values: ’tensorrt’, ’openvino’, ’none’.
’info’: Dictionary containing additional information on the device.
Restriction: Only for devices that are supported via an AI2-interface.
’name’: Name of the device.
’optimize_for_inference_params’: Dictionary with default-defined conversion parameters for a calibration or cast
operation of a deep learning model. The entries can be changed.
In case no parameter applies to the set device, an empty dictionary is returned.
Restriction: Only for devices that are supported via an AI2-interface.
’precisions’: Specifies the data types that the unit supports for the weights and/or activations of a deep-learning-
based model.
List of values: ’float32’, ’float16’, ’int8’.
’settable_device_params’: Dictionary with settable device parameters.
Restriction: Only for devices that are supported via an AI2-interface.
’type’: Type of the device.

Parameters
. DLDeviceHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_device ; handle
Handle of the deep-learning-capable hardware device.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Name of the generic parameter.
Default: ’type’
List of values: GenParamName ∈ {’calibration_precisions’, ’cast_precisions’, ’conversion_supported’, ’id’,
’ai_accelerator_interface’, ’inference_only’, ’info’, ’name’, ’optimize_for_inference_params’, ’precisions’,
’settable_device_params’, ’type’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; string / real / integer
Value of the generic parameter.
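Example
A minimal sketch of a typical call sequence (variable names are chosen for illustration):

* Query all deep-learning-capable hardware devices and inspect some of their parameters.
query_available_dl_devices ([], [], DLDeviceHandles)
for Index := 0 to |DLDeviceHandles| - 1 by 1
    get_dl_device_param (DLDeviceHandles[Index], 'name', DeviceName)
    get_dl_device_param (DLDeviceHandles[Index], 'type', DeviceType)
    get_dl_device_param (DLDeviceHandles[Index], 'inference_only', InferenceOnly)
endfor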
Result
If the parameters are valid, the operator get_dl_device_param returns the value 2 (H_MSG_TRUE). If nec-
essary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
query_available_dl_devices
Possible Successors
set_dl_model_param
Module
Foundation


optimize_dl_model_for_inference ( : : DLModelHandle, DLDeviceHandle, Precision, DLSamples, GenParam : DLModelHandleConverted, ConversionReport )

Optimize a model for inference on a device via the AI²-interface.


The operator optimize_dl_model_for_inference optimizes the input model DLModelHandle for in-
ference on the device DLDeviceHandle and returns the optimized model in DLModelHandleConverted.
This operator has two distinct functionalities: Casting the model precision to Precision and calibrating the
model based on the given samples DLSamples. Additionally in either case the model architecture may be opti-
mized for the DLDeviceHandle.
The parameter DLDeviceHandle specifies the deep learning device for which the model is optimized.
Whether the device supports optimization can be determined using get_dl_device_param with ’con-
version_supported’. After a successful execution, optimize_dl_model_for_inference sets the pa-
rameter ’precision_is_converted’ to ’true’ for the output model DLModelHandleConverted. In addi-
tion, the device in DLDeviceHandle is automatically set for the model if it supports the precision set by
the parameter Precision. Whether the device supports the requested precision can be determined using
get_dl_device_param with ’precisions’.
The parameter Precision specifies the precision to which the model should be converted. By default, models
that are delivered by HALCON have the Precision ’float32’. The following values are supported for
Precision:

• ’float32’
• ’float16’
• ’int8’

The parameter DLSamples specifies the samples on which the calibration is based. As a consequence they should
be representative. It is recommended to provide them from the training split. For most applications 10-20 samples
per class are sufficient to achieve good results.
Note that the samples are not needed for a pure cast operation. In this case, an empty tuple can be passed for
DLSamples.
The parameter GenParam specifies additional, device specific parameters and their values. Which parame-
ters to set for the given DLDeviceHandle in GenParam and their default values can be queried via the
get_dl_device_param operator with the ’optimize_for_inference_params’ parameter.
Note that certain devices expect only an empty dictionary.
The parameter ConversionReport returns a report dictionary with information about the conversion.
Attention
This operator can only be used via an AI²-interface. Furthermore, after optimization only parameters that do not
change the underlying architecture of the model can be set for DLModelHandleConverted.
For set_dl_model_param, this includes the following parameters:

• ’Any’: ’device’, ’meta_data’, ’runtime’


• ’anomaly_detection’: ’standard_deviation_factor’
• ’classification’: ’class_names’, ’ood_threshold’
• ’ocr_detection’: ’min_character_score’, ’min_link_score’, ’min_word_score’, ’orientation’, ’sort_by_line’,
’tiling’, ’tiling_overlap’
• ’ocr_recognition’: ’alphabet’, ’alphabet_internal’, ’alphabet_mapping’
• ’gc_anomaly_detection’: ’anomaly_score_tolerance’
• ’detection’: ’class_names’, ’max_num_detections’, ’max_overlap’, ’max_overlap_class_agnostic’,
’min_confidence’
• ’segmentation’: ’class_names’

For set_deep_ocr_param, this includes the following parameters:

• ’device’, ’runtime’


• ’detection_min_character_score’, ’detection_min_link_score’, ’detection_min_word_score’,


• ’detection_orientation’, ’detection_sort_by_line’,
• ’detection_tiling’, ’detection_tiling_overlap’
• ’recognition_alphabet’, ’recognition_alphabet_internal’, ’recognition_alphabet_mapping’

For set_deep_counting_model_param, this includes the following parameters:

• ’device’
• ’max_overlap’, ’min_score’

Only the AI²-interface that was used for the optimization can be set using ’device’ or ’runtime’. Additional restrictions
may apply to these parameters to ensure that the underlying architecture of the model does not change.
Parameters
. DLModelHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_model ; handle
Input model.
. DLDeviceHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_device(-array) ; handle
Device handle used for optimization.
. Precision (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Precision the model shall be converted to.
. DLSamples (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dict-array ; handle
Samples required for optimization.
. GenParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dict ; handle
Parameter dict for optimization.
. DLModelHandleConverted (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_model ; handle
Output model with new precision.
. ConversionReport (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dict ; handle
Output report for conversion.
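Example
A minimal sketch of a pure cast to ’float16’ (the model file name and the device filter are assumptions for illustration; no calibration samples are needed for a cast):

* Read a pretrained model and query a device supported via an AI²-interface.
read_dl_model ('pretrained_dl_classifier_compact.hdl', DLModelHandle)
query_available_dl_devices (['ai_accelerator_interface'], ['openvino'], DLDeviceHandles)
* Use the default conversion parameters of the device.
get_dl_device_param (DLDeviceHandles[0], 'optimize_for_inference_params', GenParam)
* Cast the model; an empty tuple is passed for DLSamples.
optimize_dl_model_for_inference (DLModelHandle, DLDeviceHandles[0], 'float16', [], \
                                 GenParam, DLModelHandleConverted, ConversionReport)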
Result
If the parameters are valid, the operator optimize_dl_model_for_inference returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
train_dl_model_batch, query_available_dl_devices
Possible Successors
set_dl_model_param, apply_dl_model
Module
Foundation. This operator uses dynamic licensing (see the ’Installation Guide’). Which of the following modules
is required depends on the specific usage of the operator:
3D Metrology, OCR/OCV, Matching, Deep Learning Enhanced, Deep Learning Professional

query_available_dl_devices ( : : GenParamName,
GenParamValue : DLDeviceHandles )

Get list of deep-learning-capable hardware devices.


query_available_dl_devices returns a list of handles. Each handle refers to a deep-learning-capable
hardware device (hereafter referred to as device) that can be used for inference or training of a deep learning
model. For each returned device, every parameter mentioned in GenParamName must be equal to at least one

of its corresponding values that appear in GenParamValue. A parameter can have more than one value by
duplicating its name in GenParamName and adding a different corresponding value in GenParamValue.
A deep-learning-capable device is either supported directly through HALCON or through an AI²-interface.
The devices that are supported directly through HALCON are equivalent to those that can be set to a deep learning
model via set_dl_model_param using ’runtime’ = ’cpu’ or ’runtime’ = ’gpu’. HALCON provides an internal
implementation for the inference or training of a deep learning model for those devices. See Deep Learning for
more details.
Devices that are supported through the AI²-interface can also be set to a deep learning model using
set_dl_model_param. In this case the inference is not executed by HALCON but by the device itself.
query_available_dl_devices returns a handle for each deep-learning-capable device supported through
HALCON and through an inference engine.
If a device is supported through HALCON and one or several inference engines,
query_available_dl_devices returns a handle for HALCON and for each inference engine.
GenParamName can be used to filter for the devices. All GenParamName that are gettable by
get_dl_device_param and that do not return a handle-typed value for GenParamValue are supported for
filtering. See the operator reference of get_dl_device_param for the list of gettable parameters. In addition,
the following values are supported:

’runtime’: The devices, which are directly supported by HALCON for this device type.
List of values: ’cpu’, ’gpu’.

GenParamName can contain the same parameter name multiple times. In this case, the filter combines the
corresponding entries with a logical ’or’. Please see the example code below for examples of how to use the filter.
Parameters
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Name of the generic parameter.
Default: []
List of values: GenParamName ∈ {’calibration_precisions’, ’cast_precisions’, ’conversion_supported’, ’id’,
’ai_accelerator_interface’, ’inference_only’, ’name’, ’optimize_for_inference_params’, ’precisions’,
’runtime’, ’settable_device_params’, ’type’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Value of the generic parameter.
Default: []
Suggested values: GenParamValue ∈ {}
. DLDeviceHandles (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_device(-array) ; handle
Tuple of DLDevice handles
Example

* Query all deep-learning-capable hardware devices
query_available_dl_devices ([], [], DLDeviceHandles)

* Query all GPUs with ID 0 or 2
query_available_dl_devices (['type', 'id', 'id'], ['gpu', 0, 2],\
                            DLDeviceHandles)

* Query the unique GPU with ID 1 supported by HALCON
query_available_dl_devices (['runtime', 'id'], ['gpu', 1], DLDeviceHandles)

Result
If the parameters are valid, the operator query_available_dl_devices returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
get_dl_device_param
Module
Foundation

set_dl_device_param ( : : DLDeviceHandle, GenParamName, GenParamValue : )

Set the parameters of a deep-learning-capable hardware device.


set_dl_device_param sets the parameter values GenParamValue for GenParamName for the
deep-learning-capable hardware device (hereafter referred to as device) DLDeviceHandle. See
query_available_dl_devices for details about deep-learning-capable hardware devices.
Parameters
. DLDeviceHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_device ; handle
Handle of the deep-learning-capable hardware device.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Name of the generic parameter.
Default: []
List of values: GenParamName ∈ {’settable_device_params’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . attribute.name(-array) ; string / real / integer / handle
Value of the generic parameter.
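Example
A minimal sketch (which parameters are settable depends on the device and its AI²-interface; the call sequence is an illustration only):

* Query a device and read its dictionary of settable parameters.
query_available_dl_devices ([], [], DLDeviceHandles)
get_dl_device_param (DLDeviceHandles[0], 'settable_device_params', SettableParams)
* ... adjust entries of SettableParams as required for the specific device ...
set_dl_device_param (DLDeviceHandles[0], 'settable_device_params', SettableParams)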
Result
If the parameters are valid, the operator set_dl_device_param returns the value 2 (H_MSG_TRUE). If nec-
essary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
query_available_dl_devices
Possible Successors
set_dl_model_param
See also
get_dl_device_param
Module
Foundation

9.1 Anomaly Detection and Global Context Anomaly Detection

This chapter explains how to use anomaly detection and Global Context Anomaly Detection based on deep learn-
ing.
With those two methods we want to detect whether or not an image contains anomalies. An anomaly means
something deviating from the norm, something unknown.


An anomaly detection or Global Context Anomaly Detection model learns common features of images without
anomalies. The trained model will infer how likely an input image contains only learned features or whether the image
contains something different. The latter is interpreted as an anomaly. This inference result is returned as a gray
value image. The pixel values therein indicate how likely the corresponding pixels in the input image show
an anomaly.
We differentiate between two model types that can be used:

Anomaly Detection With anomaly detection (model type ’anomaly_detection’) structural anomalies are targeted,
thus any feature that was not learned during training. This can, e.g., include scratches, cracks or contamina-
tion.

A possible example for anomaly detection: Every pixel of the input image gets assigned a value that
indicates how likely the pixel is to be an anomaly. The worm is not part of the worm-free apples the model
has seen during training and therefore its pixels get a much higher score.

Global Context Anomaly Detection Global Context Anomaly Detection (model type ’gc_anomaly_detection’)
comprises two tasks:

• Detecting structural anomalies
As described for anomaly detection above, structural anomalies primarily include unknown features,
like scratches, cracks or contamination.
• Detecting logical anomalies
Logical anomalies are detected if constraints regarding the image content are violated. This can, e.g.,
include a wrong number or wrong position of objects in an image.

A possible example for Global Context Anomaly Detection: Every pixel of the input image gets assigned a
value that indicates how likely the pixel is to be an anomaly. Thereby two different types of anomalies can
be detected, structural and logical ones. Structural anomaly: One apple contains a worm, which differs
from the apples the model has seen during training. Logical anomaly: One apple is sorted among lemons.
Although the apple itself is intact, the logical constraint is violated, as the model has only seen images with
correctly sorted fruit during training.

The Global Context Anomaly Detection model consists of two subnetworks. The model can be reduced
to one of the subnetworks, in order to improve the runtime and memory consumption. This is rec-
ommended if a single subnetwork performs well enough. See the parameter ’gc_anomaly_networks’ in
get_dl_model_param for details. After setting ’gc_anomaly_networks’, the model needs to be eval-
uated again, since this parameter can change the Global Context Anomaly Detection performance signifi-
cantly.

• Local subnetwork
This subnetwork is used to detect anomalies that affect the image on a smaller, local scale. It is
designed to detect structural anomalies but can find logical anomalies as well. Thus, if an anomaly can
be recognized by analyzing single patches of an image, it is detected by the local component of the
model. See the description of the parameter ’patch_size’ in get_dl_model_param for information
on how to define the local scale of this subnetwork.


• Global subnetwork
This subnetwork is used to detect anomalies that affect the image on a large, or global scale. It is
designed to detect logical anomalies but can find structural anomalies as well. Thus, if you need to see
most or all of the image to recognize an anomaly, it is detected by the global component of the model.

Training image of an exemplary task. Apples and lemons are intact, sorted correctly, and tagged with the
correct sticker.

Some anomalies that can be detected with Global Context Anomaly Detection: (1) Logical anomaly, most
likely detected by the local subnetwork (wrong sticker). (2) Structural anomaly, most likely detected by local
subnetwork (wormy apple). (3) Logical anomaly, most likely detected by global subnetwork (wrong sorting).
(4) Logical anomaly, most likely detected by global subnetwork (missing apples).

General Workflow
In this paragraph, we describe the general workflow for an anomaly detection or Global Context Anomaly Detec-
tion task based on deep learning.

Preprocess the data This part is about how to preprocess your data.

1. The information content of your dataset needs to be converted. This is done by the procedure
• read_dl_dataset_anomaly.
It creates a dictionary DLDataset which serves as a database and stores all necessary information
about your data. For more information about the data and the way it is transferred, see the section
“Data” below and the chapter Deep Learning / Model.
2. Split the dataset represented by the dictionary DLDataset. This can be done using the procedure
• split_dl_dataset.
3. The network imposes several requirements on the images. These requirements (for example the image
size and gray value range) can be retrieved with


• get_dl_model_param.
For this you need to read the model first by using
• read_dl_model.
4. Now you can preprocess your dataset. For this, you can use the procedure
• preprocess_dl_dataset.
In case of custom preprocessing, this procedure offers guidance on the implementation.
To use this procedure, specify the preprocessing parameters, e.g., the image size. Store all the parameters
with their values in a dictionary DLPreprocessParam, for which you can use the procedure
• create_dl_preprocess_param.
We recommend to save this dictionary DLPreprocessParam in order to have access to the prepro-
cessing parameter values later during the inference phase.

Training of the model This part explains how to train a model.

1. Set the training parameters and store them in the dictionary TrainParam. This can be done using the
procedure
• create_dl_train_param.
2. Train the model. This can be done using the procedure
• train_dl_model.
The procedure
• adapts models of type ’gc_anomaly_detection’ to the image statistics of the dataset calling the
procedure normalize_dl_gc_anomaly_features,
• calls the corresponding training operator train_dl_model_anomaly_dataset
(’anomaly_detection’) or train_dl_model_batch (’gc_anomaly_detection’), respec-
tively.
The procedure expects:
• the model handle DLModelHandle
• the dictionary DLDataset containing the data information
• the dictionary TrainParam containing the training parameters
3. Normalize the network. This step is only necessary when using a Global Context Anomaly Detection
model. The anomaly scores need to be normalized by applying the procedure
• normalize_dl_gc_anomaly_scores.
This needs to be done in order to get reasonable results when applying a threshold on the anomaly
scores later (see section “Specific Parameters” below).

Evaluation of the trained model In this part, we evaluate the trained model.

1. Set the model parameters which may influence the evaluation.


2. The evaluation can be done conveniently using the procedure
• evaluate_dl_model.
This procedure expects a dictionary GenParam with the evaluation parameters.
3. The dictionary EvaluationResult holds the desired evaluation measures.

Inference on new images This part covers the application of an anomaly detection or Global Context Anomaly
Detection model. For a trained model, perform the following steps (a condensed code sketch follows the list):

1. Request the requirements the model imposes on the images using the operator
• get_dl_model_param
or the procedure
• create_dl_preprocess_param_from_model.


2. Set the model parameter described in the section “Model Parameters” below, using the operator
• set_dl_model_param.
3. Generate a data dictionary DLSample for each image. This can be done using the procedure
• gen_dl_samples_from_images.
4. Every image has to be preprocessed the same way as for the training. For this, you can use the proce-
dure
• preprocess_dl_samples.
When you saved the dictionary DLPreprocessParam during the preprocessing step, you can di-
rectly use it as input to specify all parameter values.
5. Apply the model using the operator
• apply_dl_model.
6. Retrieve the results from the dictionary DLResult.
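A condensed sketch of these inference steps (the model file name, the image path, and the previously saved DLPreprocessParam are assumptions for illustration):

* Read the trained model and prepare it for inference.
read_dl_model ('best_anomaly_model.hdl', DLModelHandle)
set_dl_model_param (DLModelHandle, 'batch_size', 1)
* Read and preprocess a new image exactly as during training.
read_image (Image, 'new_image_01')
gen_dl_samples_from_images (Image, DLSampleBatch)
preprocess_dl_samples (DLSampleBatch, DLPreprocessParam)
* Apply the model and retrieve the anomaly score.
apply_dl_model (DLModelHandle, DLSampleBatch, [], DLResultBatch)
get_dict_tuple (DLResultBatch[0], 'anomaly_score', AnomalyScore)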

Data
We distinguish between data used for training, evaluation, and inference on new images.
As a basic concept, the model handles data by dictionaries, meaning it receives the input data from a dictionary
DLSample and returns a dictionary DLResult and DLTrainResult, respectively. More information on the
data handling can be found in the chapter Deep Learning / Model.

Classes In anomaly detection and Global Context Anomaly Detection there are exactly two classes:

• ’ok’, meaning without anomaly, class ID 0.


• ’nok’, meaning with anomaly, class ID 1 (on pixel level, values with an ID > 0; see the subsection “Data for
evaluation” below).

These classes apply to the whole image as well as single pixels.


Data for training This dataset consists only of images without anomalies and the corresponding information.
They have to be provided in a way the model can process them. Concerning the image requirements, find
more information in the section “Images” below.
The training data is used to train a model for your specific task. With the aid of this data the model can learn
which features the images without anomalies have in common.
Data for evaluation This dataset should include images without anomalies but it can also contain images with
anomalies. Every image within this set needs a ground truth label image_label specifying the class of
the image (see the section above). This indicates if the image shows an anomaly (’nok’) or not (’ok’).
Evaluating the model performance on finding anomalies can also be done visually on pixel level if an image
anomaly_file_name is included in the DLSample dictionary. In this image anomaly_file_name
every pixel indicates the class ID, i.e., whether the corresponding pixel in the input image shows an anomaly (pixel
value > 0) or not (pixel value equal to 0).


Scheme of anomaly_file_name. For visibility, gray values are used to represent numbers. (1) Input
image. (2) The corresponding anomaly_file_name providing the class annotations, 0: ’ok’ (white and light
gray), 2: ’nok’ (dark gray).

Images The model poses requirements on the images, such as the dimensions, the gray value range, and the
type. The specific values depend on the model itself. See the documentation of read_dl_model for the
specific values of different models. For a read model they can be queried with get_dl_model_param.
In order to fulfill these requirements, you may have to preprocess your images. Standard preprocessing of
an entire sample, including the image, is implemented in preprocess_dl_samples. In case of custom
preprocessing this procedure offers guidance on the implementation.
Model output The training output differs depending on the used model type:

• Anomaly detection: As training output, the operator train_dl_model_anomaly_dataset will
return a dictionary DLTrainResult with the best error obtained during training and the
epoch in which this error was achieved.
• Global Context Anomaly Detection: As training output, the operator train_dl_model_batch
will return a dictionary DLTrainResult with the current value of the total loss as well as values for
all other losses included in your model.

As inference and evaluation output, the model will return a dictionary DLResult for every sample. For
anomaly detection and Global Context Anomaly Detection, this dictionary includes the following extra
entries:

• anomaly_score: A score indicating how likely the entire image is to contain an anomaly. This
score is based on the pixel scores given in anomaly_image.
For Global Context Anomaly Detection, depending on the used subnetworks, the anomaly
score can also be calculated by the local (anomaly_score_local) and the global
(anomaly_score_global) subnetwork only. The anomaly_score is by default equal to the
maximum of anomaly_image. The parameter anomaly_score_tolerance can be used to
ignore a fraction of outliers in the anomaly_image when calculating the anomaly_score.
• anomaly_image: An image, where the value of each pixel indicates how likely its corresponding
pixel in the input image shows an anomaly (see the illustration below). For anomaly detection the
values are ∈ [0, 1], whereas there are no constraints for Global Context Anomaly Detection. Depending
on the used subnetworks, when using Global Context Anomaly Detection, an anomaly image can also
be calculated by the local (anomaly_image_local) or the global (anomaly_image_global)
subnetwork only.

Scheme of anomaly_image. For visualization purposes, gray values are used to represent numbers. (1) The
anomaly_file_name providing the class annotations, 0: ’ok’ (white and light gray), 2: ’nok’ (dark gray). (2)
The corresponding anomaly_image.

Specific Parameters
For an anomaly detection or Global Context Anomaly Detection model, the model parameters as well as the
hyperparameters are set using set_dl_model_param. The model parameters are explained in more detail in
get_dl_model_param. As the training for an anomaly detection model is done using the full dataset at
once and not batch-wise, certain parameters such as ’batch_size_multiplier’ have no influence.
The model returns scores but classifies neither pixels nor images as showing an anomaly or not. For this classification,
thresholds need to be given, setting the minimum score for a pixel or image to be regarded as anomalous. You
can estimate possible thresholds using the procedure compute_dl_anomaly_thresholds. Applying these
thresholds can be done with the procedure threshold_dl_anomaly_results. As a result, the procedure
adds the following (threshold-dependent) entries into the dictionary DLResult of a sample:

anomaly_class
The predicted class of the entire image (for the given threshold). For Global Context Anomaly De-
tection, depending on the used subnetworks, the anomaly class can also be calculated by the local
(anomaly_class_local) and the global (anomaly_class_global) subnetwork only.
anomaly_class_id
ID of the predicted class of the entire image (for the given threshold). For Global Context Anomaly De-
tection, depending on the used subnetworks, the anomaly class ID can also be calculated by the local
(anomaly_class_id_local) and the global (anomaly_class_id_global) subnetwork only.
anomaly_region
Region consisting of all the pixels that are regarded as showing an anomaly (for the given threshold,
see the illustration below). For Global Context Anomaly Detection, depending on the used subnetworks,
the anomaly region can also be calculated by the local (anomaly_region_local) and the global
(anomaly_region_global) subnetwork only.

Scheme of anomaly_region. For visualization purposes, gray values are used to represent numbers. (1)
The anomaly_image with the obtained pixel scores. (2) The corresponding anomaly_region.

Domain Handling During Inference


A restriction of the search area can be done by reducing the domain of the input images (e.g., reduce_domain).
The way preprocess_dl_samples handles the domain is set using the preprocessing parameter
’domain_handling’. The parameter ’domain_handling’ should be used in a way that only essential
information is passed on to the network for inference. For instance, use ’keep_domain’ to exclude unwanted
anomalies in the background when computing the anomaly score and image.
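A minimal sketch of restricting the evaluated area via the domain (the image path and the region coordinates are assumptions for illustration):

* Restrict the inference to a rectangular region of interest.
read_image (Image, 'new_image_02')
gen_rectangle1 (ROI, 100, 100, 400, 400)
reduce_domain (Image, ROI, ImageReduced)
* With 'domain_handling' = 'keep_domain' in DLPreprocessParam, only the reduced
* domain contributes to the anomaly score and anomaly image.
gen_dl_samples_from_images (ImageReduced, DLSampleBatch)
preprocess_dl_samples (DLSampleBatch, DLPreprocessParam)
apply_dl_model (DLModelHandle, DLSampleBatch, [], DLResultBatch)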
The following images show how an input image with reduced domain is inferred after the preprocessing step
depending on the set ’domain_handling’.

Input image for inference with domain (blue).

(1) anomaly_image after inference with ’full_domain’ (result: ’nok’), (2) anomaly_image after inference
with ’keep_domain’ (result: ’ok’).

train_dl_model_anomaly_dataset ( : : DLModelHandle, DLSamples, DLTrainParam : DLTrainResult )

Train a deep learning model for anomaly detection.


The operator train_dl_model_anomaly_dataset performs the training of a deep learning
model with ’type’=’anomaly_detection’ contained in DLModelHandle (for deep learning models with
’type’=’gc_anomaly_detection’ see train_dl_model_batch).
This operator processes the full training dataset at once. This is in contrast to the operator
train_dl_model_batch. The iterations over the dataset are performed internally by the operator. Conse-
quently, you only need to call this operator once with the full training dataset to train your anomaly detection
model.
The training dataset is handed over in the tuple of dictionaries DLSamples. See the chapter Deep Learning /
Model for further information on the used dictionaries and their keys. The operator expects the training
dataset to contain only images without anomalies for training the anomaly detection model.
The dictionary DLTrainParam can be used to change the hyperparameters. The following values are supported:

• max_num_epochs: This parameter specifies the maximum number of epochs performed during training. In
case the criterion specified by error_threshold is reached in an earlier epoch, the training will terminate
regardless.
Restriction: max_num_epochs >=1.
Default: max_num_epochs = 30.
• error_threshold: This parameter is a termination criterion for the training. If the training error is less
than the specified error_threshold, the training terminates successfully.
Restriction: 0.0 <= error_threshold <= 1.0.
Default: error_threshold = 0.001.
• domain_ratio: This parameter determines the percentage of information of each image used for training.
Since images tend to contain an abundance of information, it is advisable to reduce its amount. Additionally,
reducing domain_ratio can decrease the time needed for training. Please note, however, that sufficient
information needs to remain, so this value should not be set too small either. Otherwise the training
result might not be satisfactory or the training itself might even fail.
Restriction: 0.0 < domain_ratio <= 1.0.
Default: domain_ratio = 0.1.
• regularization_noise: This parameter can be set to regularize the training in order to improve ro-
bustness.
Restriction: regularization_noise >=0.0.
Default: regularization_noise = 0.0.

The output dictionary DLTrainResult contains the following values:

• final_error: The best error obtained during training.
• final_epoch: The epoch in which the error final_error was achieved.


Attention
The operator train_dl_model_anomaly_dataset internally calls functions that might not be determin-
istic. Therefore, results from multiple calls of train_dl_model_anomaly_dataset can slightly differ,
although the same input values have been used.
System requirements: To run this operator on GPU by setting ’runtime’ to ’gpu’ (see get_dl_model_param),
cuDNN and cuBLAS are required. For further details, please refer to the “Installation Guide”, paragraph
“Requirements for Deep Learning and Deep-Learning-Based Methods”. Alternatively, this operator can also be
run on CPU by setting ’runtime’ to ’cpu’.
Parameters
. DLModelHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_model ; handle
Deep learning model handle.
. DLSamples (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dict-array ; handle
Tuple of Dictionaries with input images and corresponding information.
. DLTrainParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dict ; handle
Parameter for training the anomaly detection model.
Default: []
. DLTrainResult (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dict ; handle
Dictionary with the train result data.
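Example
A minimal sketch of a training call with explicitly set hyperparameters (the preprocessed ’ok’ samples in DLSamples and the read model in DLModelHandle are assumed to exist already):

* Collect the training hyperparameters in a dictionary.
create_dict (DLTrainParam)
set_dict_tuple (DLTrainParam, 'max_num_epochs', 30)
set_dict_tuple (DLTrainParam, 'error_threshold', 0.001)
set_dict_tuple (DLTrainParam, 'domain_ratio', 0.2)
* Train on the full dataset at once and inspect the result.
train_dl_model_anomaly_dataset (DLModelHandle, DLSamples, DLTrainParam, DLTrainResult)
get_dict_tuple (DLTrainResult, 'final_error', FinalError)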
Result
If the parameters are valid, the operator train_dl_model_anomaly_dataset returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).
• Multithreading scope: global (may be called from any thread).
• Automatically parallelized on internal data level.
Possible Predecessors
read_dl_model, set_dl_model_param, get_dl_model_param
Possible Successors
apply_dl_model
See also
apply_dl_model
Module
Foundation. This operator uses dynamic licensing (see the ’Installation Guide’). Which of the following modules
is required depends on the specific usage of the operator:
Deep Learning Professional

9.2 Classification
This chapter explains how to use classification based on deep learning, both for the training and inference phases.
Classification based on deep learning is a method in which an image gets a set of confidence values assigned.
These confidence values indicate how likely the image belongs to each of the distinguished classes. Thus, if we
regard only the top prediction, classification means assigning a specific class out of a given set of classes to an
image. This is illustrated in the following schema.

A possible classification example, in which the network distinguishes three classes. The input image gets
confidence values assigned for each of the three distinguished classes: ’apple’ 0.85, ’lemon’ 0.03, and ’orange’
0.12. The top prediction tells us, the image is recognized as ’apple’.


Out-of-Distribution Detection for classification is a method for identifying inputs which differ significantly from
the classes the model was trained on. It is crucial for ensuring model safety and robustness. Out-of-Distribution
Detection helps to filter potentially problematic cases for further review. This is illustrated in the following schema.

A possible example of classification with the addition of Out-of-Distribution Detection. The object in the
inference image differs significantly from the data used to train the network. In addition to the confidence values
for the three classes to be distinguished (’apple’ 0.65, ’lemon’ 0.22, and ’orange’ 0.13), the network also indicates
that the image does not belong to any of the three trained classes (Out-of-Distribution).

In order to perform your specific task, i.e., to classify your data into the classes you want to have distinguished, the
classifier has to be trained accordingly. In HALCON, we use a technique called transfer learning (see also the
chapter Deep Learning). Hence, we provide pretrained networks, representing classifiers which have been trained
on huge amounts of labeled image data. These classifiers have been trained and tested to perform well on industrial
image classification tasks. One of these classifiers, already trained for general classifications, is now retrained for
your specific task. For this, the classifier needs to know which classes are to be distinguished and what
examples of them look like. This is represented by your dataset, i.e., your images with the corresponding ground truth
labels. More information on the data requirements can be found in the section “Data”.
In HALCON, classification with deep learning is implemented within the more general deep learning model. For
more information on the latter, see the chapter Deep Learning / Model. For the specific system requirements in
order to apply deep learning, please refer to the HALCON “Installation Guide”.
The following sections are introductions to the general workflow needed for classification, information related to
the involved data and parameters, and explanations of the evaluation measures.
General Workflow
In this paragraph, we describe the general workflow for a classification task based on deep learning. It is subdivided
into the four parts preprocessing of the data, training of the model, evaluation of the trained model, and inference
on new images. Thereby we assume, your dataset is already labeled, see also the section “Data” below. Have a
look at the HDevelop example series classify_pill_defects_deep_learning for an application.

Preprocess the data This part is about how to preprocess your data. The single steps are also shown in the
HDevelop example classify_pill_defects_deep_learning_1_preprocess.hdev.

1. The information about what is to be found in which image of your training dataset needs to be transferred.
This is done by the procedure
• read_dl_dataset_classification.
Thereby a dictionary DLDataset is created, which serves as a database and stores all necessary
information about your data. For more information about the data and the way it is transferred, see the
section “Data” below and the chapter Deep Learning / Model.
2. Split the dataset represented by the dictionary DLDataset. This can be done using the procedure
• split_dl_dataset.
The resulting split will be saved under the key split in each sample entry of DLDataset.
3. Read in a pretrained network using the operator
• read_dl_model.
This operator is likewise used when you want to read your own trained networks, after you saved them
with write_dl_model.
The network will impose several requirements on the images, as the image dimensions and the gray
value range. The default values are listed in read_dl_model. These are the values with which the
networks have been pretrained. The network architectures allow different image dimensions, which can
be set with set_dl_model_param, but depending on the network a change may make a retraining
necessary. The actually set values can be retrieved with


• get_dl_model_param.
4. Now you can preprocess your dataset. For this, you can use the procedure
• preprocess_dl_dataset.
In case of custom preprocessing, this procedure offers guidance on the implementation.
To use this procedure, specify the preprocessing parameters, e.g., the image size. Store all the parameters
with their values in a dictionary DLPreprocessParam, for which you can use the procedure
• create_dl_preprocess_param.
We recommend to save this dictionary DLPreprocessParam in order to have access to the prepro-
cessing parameter values later during the inference phase.

Training of the model This part is about how to train a classifier. The single steps are also shown in the HDevelop
example classify_pill_defects_deep_learning_2_train.hdev.

1. Set the training parameters and store them in the dictionary TrainParam. These parameters include:
• the hyperparameters, for an overview see the chapter Deep Learning.
• parameters for possible data augmentation (optional).
• parameters for the evaluation during training.
• parameters for the visualization of training results.
• parameters for serialization.
This can be done using the procedure
• create_dl_train_param.
2. Train the model. This can be done using the procedure
• train_dl_model.
The procedure expects:
• the model handle DLModelHandle
• the dictionary with the data information DLDataset
• the dictionary with the training parameter TrainParam
• the information on how many epochs the training shall run.
In case the procedure train_dl_model is used, the total loss as well as optional evaluation mea-
sures are visualized.

Evaluation of the trained model In this part we evaluate the trained classifier. The single steps are also shown in
the HDevelop example classify_pill_defects_deep_learning_3_evaluate.hdev.

1. The evaluation can conveniently be done using the procedure
• evaluate_dl_model.
2. The dictionary EvaluationResult holds the requested evaluation measures. You can visualize your
evaluation results using the procedure
• dev_display_classification_evaluation.
3. A heatmap can be generated for specified samples using
(a) the operator gen_dl_model_heatmap
(b) the procedure gen_dl_model_classification_heatmap

Fit model to Out-of-Distribution Detection (optional) In this part, we extend the trained classifier so it
can detect out-of-distribution data. The single steps are also shown in the HDevelop example
detect_out_of_distribution_samples_for_classification.hdev.

1. Fit the Out-of-Distribution Detection using
• fit_dl_out_of_distribution.
2. Optional step: Add out-of-distribution data to the dataset for evaluation, using the procedure
• add_dl_out_of_distribution_data.


3. Rerun the evaluation using the procedure
• evaluate_dl_model.
4. The dictionary EvaluationResult holds the requested evaluation measures. You can visualize your
evaluation results using the procedure
• dev_display_classification_evaluation.

Inference on new images This part covers the application of a deep-learning-based classification model.
The single steps are also shown in the HDevelop example
classify_pill_defects_deep_learning_4_infer.hdev; a condensed code sketch follows the list.

1. Set the parameters, e.g., ’batch_size’, using the operator
• set_dl_model_param.
2. Generate a data dictionary DLSample for each image. This can be done using the procedure
• gen_dl_samples_from_images.
3. Preprocess the images as done for the training. We recommend to do this using the procedure
• preprocess_dl_samples.
When you saved the dictionary DLPreprocessParam during the preprocessing step, you can di-
rectly use it as input to specify all parameter values.
4. Apply the model using the operator
• apply_dl_model.
5. Retrieve the results from the dictionary ’DLResultBatch’.
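A condensed sketch of these inference steps (the model file name, the image path, and the result keys used for retrieval are assumptions for illustration):

* Read the retrained classifier and set the batch size.
read_dl_model ('best_classifier.hdl', DLModelHandle)
set_dl_model_param (DLModelHandle, 'batch_size', 1)
* Read and preprocess a new image exactly as during training.
read_image (Image, 'new_image_03')
gen_dl_samples_from_images (Image, DLSampleBatch)
preprocess_dl_samples (DLSampleBatch, DLPreprocessParam)
* Apply the model and read the top prediction from the result dictionary.
apply_dl_model (DLModelHandle, DLSampleBatch, [], DLResultBatch)
get_dict_tuple (DLResultBatch[0], 'classification_class_names', PredictedClassNames)
get_dict_tuple (DLResultBatch[0], 'classification_confidences', PredictedConfidences)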

Data
We distinguish between data used for training and data used for inference. The latter consists of bare images. For
the former, however, you already know to which class the images belong and provide this information via the
corresponding labels.
As a basic concept, the model handles data via dictionaries, meaning it receives the input data in a dictionary
DLSample and returns a dictionary DLResult and DLTrainResult, respectively. More information on the
data handling can be found in the chapter Deep Learning / Model.

Data for training and evaluation The dataset consists of images and corresponding information. They have to be
provided in a way the model can process them. Concerning the image requirements, find more information
in the section “Images” below.
The training data is used to train and evaluate a network for your specific task. With the aid of this data the
classifier can learn which classes are to be distinguished and what their representatives look like. In
classification, the image is classified as a whole. Therefore, the training data consists of images and their
ground truth labels, i.e., the class you declare the image belongs to. Note that the images should be as
representative as possible for your task. There are different possible ways to store and retrieve this
information. How the data has to be formatted in HALCON for a DL model is explained in the chapter
Deep Learning / Model. In short, a dictionary DLDataset serves as a database for the information needed
by the training and evaluation procedures. The procedure read_dl_dataset_classification
supports the following sources of the ground truth label for an image:

• The last directory name containing the image


• The file name.

For training a classifier, we use a technique called transfer learning (see the chapter Deep Learning). For this,
you need fewer resources, but still a suitable set of data. While in general the network should be more reliable
when trained on a larger dataset, the amount of data needed for training also depends on the complexity of
the task. You also want enough training data to split it into three subsets, used for training, validation, and
testing the network. These subsets are preferably independent and identically distributed, see the section
“Data” in the chapter Deep Learning.


Images Regardless of the application, the network poses requirements on the images regarding e.g.,
the image dimensions. The specific values depend on the network itself and can be queried
with get_dl_model_param. In order to fulfill these requirements, you may have to prepro-
cess your images. Standard preprocessing is implemented in preprocess_dl_dataset and in
preprocess_dl_samples for a single sample, respectively. In case of custom preprocessing these
procedures offer guidance on the implementation.
Network output The network output depends on the task:

training As output, the operator will return a dictionary DLTrainResult with the current value of the
total loss as well as values for all other losses included in your model.
inference and evaluation As output, the network will return a dictionary DLResult for every sample. For
classification, this dictionary will include for each input image a tuple with the confidence values for
every class to be distinguished in decreasing order and a second tuple with the corresponding class IDs.

Interpreting the Classification Results


When we classify an image, we obtain a set of confidence values, telling us the affinity of the image to every class.
It is also possible to compute the following values.

Confusion Matrix, Precision, Recall, and F-score In classification whole images are classified. As a conse-
quence, the instances of a confusion matrix are images. See the chapter Deep Learning for explanations
on confusion matrices.
You can generate a confusion matrix with the aid of the procedures gen_confusion_matrix and
gen_interactive_confusion_matrix. Thereby, the interactive procedure gives you the possibility
to select examples of a specific category, but it does not work with exported code.
From such a confusion matrix you can derive various values. The precision is the proportion of all correctly
predicted positives to all predicted positives (true and false ones). Thus, it is a measure of how many positive
predictions really belong to the selected class.

precision = TP / (TP + FP)

The recall, also called the "true positive rate", is the proportion of all correctly predicted positives to all real
positives. Thus, it is a measure of how many samples belonging to the selected class were predicted correctly
as positives.

recall = TP / (TP + FN)

A classifier with high recall but low precision finds most of the positives (thus most members of the class),
but at the cost of also classifying many negatives as members of the class. A classifier with high precision but
low recall is just the opposite, classifying only a few samples as positives, but most of these predictions are
correct. An ideal classifier with high precision and high recall will classify many samples as positive with
high accuracy.
To represent this with a single number, we compute the F1-score, the harmonic mean of precision and recall.
Thus, it is a measure of the classifier’s accuracy.

F1-score = 2 ∗ (precision ∗ recall) / (precision + recall)

For the example from the confusion matrix shown in Deep Learning we get for the class ’apple’ the
values precision: 1.00 (= 68/(68+0+0)), recall: 0.74 (= 68/(68+21+3)), and F1-score: 0.85
(= 2*(1.00*0.74)/(1.00+0.74)).
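As an illustration (not part of the original text), these values could be computed in HDevelop directly
from the confusion matrix counts of the ’apple’ example:

* Counts taken from the 'apple' example above.
TP := 68
FP := 0
FN := 21 + 3
Precision := real(TP) / (TP + FP)
Recall := real(TP) / (TP + FN)
F1Score := 2 * Precision * Recall / (Precision + Recall)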


fit_dl_out_of_distribution ( : : DLModelHandle, DLDataset, GenParam : )

Extend a deep learning model for Out-of-Distribution Detection.


fit_dl_out_of_distribution extends a trained deep learning model DLModelHandle of ’type’ =
’classification’ for Out-of-Distribution Detection. This functionality allows the model to detect samples that
differ significantly from the classes it was trained on, known as “Out-of-Distribution” (OOD) samples.
When apply_dl_model is called subsequently, the results will include the following additional entries related
to Out-of-Distribution Detection:

’ood_result’: Indicates whether the sample is predicted as out-of-distribution.


’ood_score’: Indicates how much the sample differs from the trained classes. The higher this score, the more
likely it is that the sample is out-of-distribution.
’ood_threshold’: If ’ood_score’ exceeds this threshold, the sample is predicted as out-of-distribution. The out-of-
distribution threshold is computed during the execution of fit_dl_out_of_distribution and stored
within the model handle DLModelHandle as ’ood_threshold’. If required, the threshold can be
adjusted manually using the operator set_dl_model_param.

For fit_dl_out_of_distribution to work properly, it is important that DLDataset is the same dataset,
with the same split and preprocessing parameters, as the one used for training DLModelHandle. It is crucial
that the provided dataset DLDataset contains diverse and sufficient samples for each class to ensure reliable
Out-of-Distribution Detection. If the dataset is too small or lacks variation, fit_dl_out_of_distribution
may return an error. In such cases, additional training data should be added to the dataset.
fit_dl_out_of_distribution can be applied to any classification model supported by HALCON. For
models created using Deep Learning / Framework operators or read from an ONNX model file, Out-of-Distribution
Detection compatibility may vary depending on the architecture.
The performance of the model for Out-of-Distribution Detection can be evaluated using the procedure
evaluate_dl_model. To evaluate the model on out-of-distribution data, these can be added to the
DLDataset using the procedure add_dl_out_of_distribution_data, allowing for testing whether the
model can accurately separate in-distribution from out-of-distribution data. Adjustments to the ’ood_threshold’
will affect evaluation results. Therefore, it is recommended to re-evaluate the model after making such changes.
GenParam is a dictionary for setting generic parameters. Currently no generic parameters are supported.
Attention
If fit_dl_out_of_distribution is called for a model that has already been extended with
Out-of-Distribution Detection, the previous internal calculations are discarded and the model is adapted anew.
Certain modifications to the model, such as changing the number of classes or continuing training of the model,
cannot be performed once the model has been extended for Out-of-Distribution Detection. To make such changes
possible, the model internal Out-of-Distribution Detection must first be removed from the model using the pa-
rameter ’clear_ood’ in set_dl_model_param. Once removed, fit_dl_out_of_distribution can be
called again to re-enable Out-of-Distribution Detection.
Parameters
. DLModelHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_model ; handle
Handle of a deep learning classification model.
. DLDataset (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dict ; handle
Dataset, which was used for training the model.
. GenParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dict ; handle
Dictionary for generic parameters.
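Example

The following is a minimal sketch (not an existing example program); variable names are placeholders
and the sample is assumed to be already preprocessed.

* Extend the trained classification model for Out-of-Distribution Detection.
create_dict (GenParam)
fit_dl_out_of_distribution (DLModelHandle, DLDataset, GenParam)
* Apply the extended model; the results now contain the OOD entries.
apply_dl_model (DLModelHandle, DLSample, [], DLResultBatch)
get_dict_tuple (DLResultBatch[0], 'ood_result', IsOutOfDistribution)
get_dict_tuple (DLResultBatch[0], 'ood_score', OODScore)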
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.


Possible Predecessors
read_dl_model
Module
Deep Learning Professional

9.3 Framework

create_dl_layer_activation ( : : DLLayerInput, LayerName, ActivationType,
GenParamName, GenParamValue : DLLayerActivation )

Create an activation layer.


The operator create_dl_layer_activation creates an activation layer whose handle is returned in
DLLayerActivation.
The parameter DLLayerInput determines the feeding input layer and expects the layer handle as value.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The parameter ActivationType sets the type of the activation. Supported activation types are:

’relu’: Rectified linear unit (ReLU) activation. By setting a specific ReLU parameter, another type can be specified
instead of the standard ReLU:
• Standard ReLU, defined as follows:

ReLU (x) := max(0, x)

• Bounded ReLU, defined as follows:

  ReLU(x) := 0   if x ≤ 0,
             x   if 0 < x ≤ β,
             β   otherwise.

Setting the generic parameter ’upper_bound’ will result in a bounded ReLU and determines the value of
β.
• Leaky ReLU, defined as follows:

  ReLU(x) := αx  if x ≤ 0,
             x   otherwise.

Setting the generic parameter ’leaky_relu_alpha’ results in a leaky ReLU and determines the value α.
’sigmoid’: Sigmoid activation, which is defined as follows:

  Sigmoid(x) := 1 / (1 + exp(−x))

The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
’upper_bound’: Float value defining an upper bound for a rectified linear unit. If the activation layer is part of
a model which has been created using create_dl_model, the upper bound can be unset. To do so, use
set_dl_model_layer_param and set an empty tuple for ’upper_bound’.
Default: []


’leaky_relu_alpha’: Float value defining the alpha parameter of a leaky ReLU.


Restriction: The value of ’leaky_relu_alpha’ must be positive or zero.
Default: 0.0

Certain parameters of layers created using this operator create_dl_layer_activation can be set and
retrieved using further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.

Layer Parameters set get


’activation_type’ (ActivationType) x x
’input_layer’ (DLLayerInput) x
’name’ (LayerName) x x
’output_layer’ (DLLayerActivation) x
’shape’ x
’type’ x

Generic Layer Parameters set get


’is_inference_output’ x x
’leaky_relu_alpha’ x x
’num_trainable_params’ x
’upper_bound’ x x

Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. ActivationType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Activation type.
Default: ’relu’
List of values: ActivationType ∈ {’relu’, ’sigmoid’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’, ’upper_bound’, ’leaky_relu_alpha’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerActivation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Activation layer.
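Example

The following is a minimal sketch (not an existing example program) showing a bounded and a leaky
ReLU; layer names and parameter values are placeholders.

create_dl_layer_input ('input', [224,224,3], [], [], DLLayerInput)
* Bounded ReLU: activations are clipped at the value of 'upper_bound'.
create_dl_layer_activation (DLLayerInput, 'relu6', 'relu', \
                            ['upper_bound'], [6.0], DLLayerReLU6)
* Leaky ReLU: negative inputs are scaled by 'leaky_relu_alpha'.
create_dl_layer_activation (DLLayerInput, 'leaky_relu', 'relu', \
                            ['leaky_relu_alpha'], [0.1], DLLayerLeakyReLU)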
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Module
Deep Learning Professional


create_dl_layer_batch_normalization ( : : DLLayerInput,
LayerName, Momentum, Epsilon, Activation, GenParamName,
GenParamValue : DLLayerBatchNorm )

Create a batch normalization layer.


The operator create_dl_layer_batch_normalization creates a batch normalization layer whose han-
dle is returned in DLLayerBatchNorm. Batch normalization is used to improve the performance and stability of
a neural network during training. The mean and variance of each input activation are calculated for each batch and
the input values are transformed to have zero mean and unit variance. Moreover, a linear scale and shift transfor-
mation is learned. During training, to take all samples into account, the batch-wise calculated mean and variance
values are combined with a Momentum into running mean and running variance, where i denotes the iteration
index:

running_mean(i) = (1 − Momentum) ∗ mean(i) + Momentum ∗ running_mean(i − 1),


running_variance(i) = (1 − Momentum) ∗ variance(i) + Momentum ∗ running_variance(i − 1).

To affect the mean and variance values you can set the following options for Momentum:

Given number: For example: 0.9. This is the default and recommended option.
Restriction: 0 ≤ Momentum < 1
’auto’: Combines mean and variance values by a cumulative moving average. This is only recommended in case
the parameters of all previous layers in the network are frozen, i.e., have a learning rate of 0.
’freeze’: Stops the adjustment of the mean and variance and their values stay fixed. In this case, the mean and vari-
ance are used during training for normalizing a batch, analogously to how the batch normalization operates
during inference. The parameters of the linear scale and shift transformation, however, remain learnable.

Epsilon is a small offset to the variance and is used to control the numerical stability. Usually its default value
should be adequate.
The parameter DLLayerInput determines the feeding input layer.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The parameter Activation determines whether an activation is performed after the batch normalization in order
to optimize the runtime performance.

• ’relu’: perform a ReLU activation after the batch normalization.


It is possible to specify an upper bound to the ReLU operation (see create_dl_layer_activation)
via the generic parameter ’upper_bound’.
• ’none’: no activation operation is performed.

It is not possible to specify a leaky ReLU or a sigmoid activation function. Use
create_dl_layer_activation instead.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’bias_filler’: See create_dl_layer_convolution for a detailed explanation of this parameter and its val-
ues.
List of values: ’xavier’, ’msra’, ’const’.
Default: ’const’
’bias_filler_const_val’: Constant value.
Restriction: ’bias_filler’ must be set to ’const’.
Default: 0
’bias_filler_variance_norm’: See create_dl_layer_convolution for a detailed explanation of this pa-
rameter and its values.
List of values: ’norm_out’, ’norm_in’, ’norm_average’, or constant value (in combination with ’bias_filler’
= ’msra’).
Default: ’norm_out’


’bias_term’: Determines whether the created batch normalization layer has a bias term (’true’) or not (’false’).
Default: ’true’
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
’learning_rate_multiplier’: Multiplier for the learning rate for this layer that is used during training. If ’learn-
ing_rate_multiplier’ is set to 0.0, the layer is skipped during training.
Default: 1.0
’learning_rate_multiplier_bias’: Multiplier for the learning rate of the bias term. The total bias learning rate is
the product of ’learning_rate_multiplier_bias’ and ’learning_rate_multiplier’.
Default: 1.0
’upper_bound’: Float value defining an upper bound for a rectified linear unit. If the activation layer is part of a
model, which has been created using create_dl_model, the upper bound can be unset. To do so, use
set_dl_model_layer_param and set an empty tuple for ’upper_bound’.
Default: []
’weight_filler’: See create_dl_layer_convolution for a detailed explanation of this parameter and its
values.
List of values: ’xavier’, ’msra’, ’const’.
Default: ’const’
’weight_filler_const_val’: See create_dl_layer_convolution for a detailed explanation of this parame-
ter and its values.
Default: 1.0
’weight_filler_variance_norm’: See create_dl_layer_convolution for a detailed explanation of this pa-
rameter and its values.
List of values: ’norm_in’, ’norm_out’, ’norm_average’, or constant value (in combination with
’weight_filler’ = ’msra’).
Default: ’norm_in’

Certain parameters of layers created using this operator create_dl_layer_batch_normalization


can be set and retrieved using further operators. The following tables give an overview, which
parameters can be set using set_dl_model_layer_param and which ones can be re-
trieved using get_dl_model_layer_param or get_dl_layer_param. Note, the operators
set_dl_model_layer_param and get_dl_model_layer_param require a model created by
create_dl_model.

Layer Parameters set get


’activation_mode’ (Activation) x
’epsilon’ (Epsilon) x
’input_layer’ (DLLayerInput) x
’momentum’ (Momentum) x x
’name’ (LayerName) x x
’output_layer’ (DLLayerBatchNorm) x
’shape’ x
’type’ x


Generic Layer Parameters set get


’bias_filler’ x x
’bias_filler_const_val’ x x
’bias_filler_variance_norm’ x x
’bias_term’ x
’is_inference_output’ x x
’learning_rate_multiplier’ x x
’learning_rate_multiplier_bias’ x x
’num_trainable_params’ x
’upper_bound’ x x
’weight_filler’ x x
’weight_filler_const_val’ x x
’weight_filler_variance_norm’ x x

Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. Momentum (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string / real
Momentum.
Default: 0.9
List of values: Momentum ∈ {0.9, 0.99, 0.999, ’auto’, ’freeze’}
. Epsilon (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Variance offset.
Default: 0.0001
. Activation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Optional activation function.
Default: ’none’
List of values: Activation ∈ {’none’, ’relu’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’bias_filler’, ’bias_filler_variance_norm’, ’bias_filler_const_val’,
’bias_term’, ’is_inference_output’, ’learning_rate_multiplier’, ’learning_rate_multiplier_bias’,
’upper_bound’, ’weight_filler’, ’weight_filler_variance_norm’, ’weight_filler_const_val’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’xavier’, ’msra’, ’const’, ’nearest_neighbor’, ’bilinear’, ’norm_in’,
’norm_out’, ’norm_average’, ’true’, ’false’, 1.0, 0.9, 0.0}
. DLLayerBatchNorm (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Batch normalization layer.
Example

create_dl_layer_input ('input', [224,224,3], [], [], DLLayerInput)
* In practice, one typically sets ['bias_term'], ['false'] for a convolution
* that is directly followed by a batch normalization layer.
create_dl_layer_convolution (DLLayerInput, 'conv1', 3, 1, 1, 64, 1, \
'none', 'none', ['bias_term'], ['false'], \
DLLayerConvolution)
create_dl_layer_batch_normalization (DLLayerConvolution, 'bn1', 0.9, \
0.0001, 'none', [], [], \
DLLayerBatchNorm)


Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
create_dl_layer_convolution
Possible Successors
create_dl_layer_activation, create_dl_layer_convolution
References
Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing
Internal Covariate Shift," Proceedings of the 32nd International Conference on Machine Learning, (ICML) 2015,
Lille, France, 6-11 July 2015, pp. 448–456
Module
Deep Learning Professional

create_dl_layer_class_id_conversion ( : : DLLayerInput,
LayerName, ConversionMode, GenParamName,
GenParamValue : DLLayerClassIdConversion )

Create a class ID conversion layer.


The operator create_dl_layer_class_id_conversion creates a class ID conversion layer whose han-
dle is returned in DLLayerClassIdConversion. The layer converts between the IDs used internally by the
network and the target / output class IDs.
The network internally uses consecutive integer values starting from 0 as IDs (the number of values depends
on the model type). In case the target / output class IDs differ from the internal IDs, this layer can be used
to convert between them. The target / output class IDs are stored in the model parameter ’class_ids’ (see
get_dl_model_param for more information on this parameter). If no ’class_ids’ are set, this layer copies
the input to the output.
The parameter ConversionMode specifies the conversion direction and accepts the following values:

• ’from_class_id’: Convert target / output class IDs into internal IDs. This mode is typically used after a target
input layer.
• ’to_class_id’: Convert internal IDs into target / output class IDs. This mode is typically used after an infer-
ence output layer.

The parameter DLLayerInput determines the feeding input layer and expects the layer handle as value.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’

Certain parameters of layers created using this operator create_dl_layer_class_id_conversion


can be set and retrieved using further operators. The following tables give an overview, which
parameters can be set using set_dl_model_layer_param and which ones can be re-
trieved using get_dl_model_layer_param or get_dl_layer_param. Note, the operators
set_dl_model_layer_param and get_dl_model_layer_param require a model created by
create_dl_model.


Layer Parameters set get


’input_layer’ (DLLayerInput) x
’name’ (LayerName) x x
’output_layer’ (DLLayerClassIdConversion) x
’shape’ x
’to_class_id’ (ConversionMode) x
’type’ x

Generic Layer Parameters set get


’is_inference_output’ x x
’num_trainable_params’ x

Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. ConversionMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Direction of the class ID conversion.
Default: ’from_class_id’
List of values: ConversionMode ∈ {’from_class_id’, ’to_class_id’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerClassIdConversion (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Class IDs conversion layer.
Example

* Example demonstrating the usage of
* create_dl_layer_class_id_conversion.
*
dev_update_off ()
set_system ('seed_rand', 42)
*
* Create simple segmentation model.
NumClasses := 3
InputShape := [32, 32, 3]
*
* Input feeding layers.
create_dl_layer_input ('image', InputShape, [], [], DLLayerInput)
create_dl_layer_input ('target', [InputShape[0],InputShape[1],1], [], [], \
DLLayerTarget)
create_dl_layer_class_id_conversion (DLLayerTarget, 'target_internal', \
'from_class_id', [], [], \
DLLayerTargetInternal)
* Feature extraction layers.
create_dl_layer_convolution (DLLayerInput, 'conv1', 3, 1, 1, 32, 1, \
'half_kernel_size', 'relu', [], [], \
DLLayerConv1)
create_dl_layer_convolution (DLLayerConv1, 'conv2', 3, 1, 1, 32, 1, \
                             'half_kernel_size', 'relu', [], [], \
                             DLLayerConv2)
* Output generation layers.
create_dl_layer_convolution (DLLayerConv2, 'conv_final', 1, 1, 1, \
NumClasses, 1, 'none', 'none', [], [], \
DLLayerConvFinal)
create_dl_layer_softmax (DLLayerConvFinal, 'softmax', [], [], \
DLLayerSoftMax)
create_dl_layer_depth_max (DLLayerSoftMax, 'output_internal', \
'argmax', [], [], DLLayerOutputInternal, _)
create_dl_layer_class_id_conversion (DLLayerOutputInternal, 'output', \
'to_class_id', [], [], DLLayerOutput)
* Loss layer.
create_dl_layer_loss_cross_entropy (DLLayerSoftMax, DLLayerTargetInternal, \
[], 'loss', 1.0, [], [], DLLayerLoss)
*
* Create the model.
create_dl_model ([DLLayerOutput, DLLayerLoss], DLModelHandle)
set_dl_model_param (DLModelHandle, 'type', 'segmentation')
set_dl_model_param (DLModelHandle, 'runtime', 'cpu')
*
* Test model on dummy example data.
read_image (Image, 'claudia')
zoom_image_size (Image, Image, InputShape[0], InputShape[1], 'constant')
convert_image_type (Image, Image, 'real')
*
* Fill target image with specific target class IDs.
ClassIDs := [42, 17, 5]
gen_image_const (Target, 'real', InputShape[0], InputShape[1])
paint_region (Target, Target, Target, ClassIDs[0], 'fill')
gen_rectangle1 (RectClass1, 1, 3, 16, 27)
paint_region (RectClass1, Target, Target, ClassIDs[1], 'fill')
gen_rectangle1 (RectClass2, 19, 1, 30, 30)
paint_region (RectClass2, Target, Target, ClassIDs[2], 'fill')
*
* Set class IDs in the model.
set_dl_model_param (DLModelHandle, 'class_ids', ClassIDs)
*
* Create test sample.
create_dict (DLSample)
set_dict_object (Image, DLSample, 'image')
set_dict_object (Target, DLSample, 'target')
*
* Train model for a few iterations. Note that training would not
* work without the first class ID conversion layer 'target_internal'.
for Idx := 1 to 100 by 1
train_dl_model_batch (DLModelHandle, DLSample, DLTrainResult)
endfor
*
* Apply model on test image. With the second class ID conversion
* layer 'output', the image now contains values according to the
* target IDs in segmentation_image.
apply_dl_model (DLModelHandle, DLSample, [], DLApplyResult)
get_dict_object (SegmentationImage, DLApplyResult, 'output')
dev_display (SegmentationImage)

Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).


• Processed without parallelization.
Module
Deep Learning Professional

create_dl_layer_concat ( : : DLLayerInputs, LayerName, Axis, GenParamName,
GenParamValue : DLLayerConcat )

Create a concatenation layer.


The operator create_dl_layer_concat creates a concatenation layer whose handle is returned in
DLLayerConcat.
The parameter DLLayerInputs determines the feeding input layers. This layer expects multiple layers as input.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
A concatenation layer concatenates the data tensors of the input layers in DLLayerInputs and returns a sin-
gle data tensor DLLayerConcat. The parameter Axis specifies along which dimension the inputs should be
concatenated. The supported options for Axis are:

’batch’: Concatenation is applied along the batch-dimension.


Example: if you concatenate two inputs A and B of shape (h, w, d, b) = (1, 1, 1, 2), where A = [A0, A1] and B
= [B0, B1], you obtain the output [A0, A1, B0, B1] with shape (1, 1, 1, 4).
’batch_interleaved’: Concatenation is applied along the depth-dimension, but the output is reshaped as if the
data was concatenated along the batch-dimension. For this dimension, all inputs need to have exactly the
same shape.
Note that when the input batch_size is 1, the concatenation is identical for ’batch’ and
’batch_interleaved’.
Example: if you concatenate two inputs A and B of shape (h, w, d, b) = (1, 1, 1, 2), where A = [A0, A1] and B
= [B0, B1], you obtain the output [A0, B0, A1, B1] with shape (1, 1, 1, 4).
’depth’: Concatenation is applied along the depth-dimension.
Example: if you concatenate two inputs A and B of shape (h, w, d, b) = (1, 1, 1, 2), where A = [A0, A1] and B
= [B0, B1], you obtain the output [A0, A1, B0, B1] with shape (1, 1, 2, 2).
’height’: Concatenation is applied along the height-dimension.
’width’: Concatenation is applied along the width-dimension.

Note that all non-concatenated dimensions must be equal for all input data tensors.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’

Certain parameters of layers created using this operator create_dl_layer_concat can be set and re-
trieved using further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.

Layer Parameters set get


’input_layer’ (DLLayerInputs) x
’name’ (LayerName) x x
’output_layer’ (DLLayerConcat) x
’shape’ x
’type’ x


Generic Layer Parameters set get


’is_inference_output’ x x
’num_trainable_params’ x

Parameters
. DLLayerInputs (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer(-array) ; handle
Feeding input layers.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. Axis (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Dimension along which the input layers are concatenated.
Default: ’depth’
List of values: Axis ∈ {’batch’, ’batch_interleaved’, ’depth’, ’height’, ’width’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerConcat (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Concatenation layer.
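Example

The following is a minimal sketch (not an existing example program): two convolution branches on the
same input are concatenated along the depth dimension; layer names and filter counts are placeholders.

create_dl_layer_input ('input', [64,64,3], [], [], DLLayerInput)
create_dl_layer_convolution (DLLayerInput, 'conv_a', 3, 1, 1, 16, 1, \
                             'half_kernel_size', 'relu', [], [], DLLayerConvA)
create_dl_layer_convolution (DLLayerInput, 'conv_b', 5, 1, 1, 16, 1, \
                             'half_kernel_size', 'relu', [], [], DLLayerConvB)
* Both branches have shape [64,64,16]; concatenating along 'depth'
* yields an output tensor of shape [64,64,32].
create_dl_layer_concat ([DLLayerConvA,DLLayerConvB], 'concat', 'depth', \
                        [], [], DLLayerConcat)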
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Module
Deep Learning Professional

create_dl_layer_convolution ( : : DLLayerInput, LayerName, KernelSize, Dilation,
Stride, NumKernel, Groups, Padding, Activation, GenParamName,
GenParamValue : DLLayerConvolution )

Create a convolutional layer.


The operator create_dl_layer_convolution creates a convolutional layer with NumKernel kernels in
Groups filter groups whose handle is returned in DLLayerConvolution.
The parameter DLLayerInput determines the feeding input layer and expects the layer handle as value.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The parameter KernelSize specifies the filter kernel in the dimensions width and height.
The parameter Dilation specifies the factor of filter dilation in the dimensions width and height.
The parameter Stride specifies how the filter is shifted.
The values for KernelSize, Dilation, and Stride can be set as

• a single value which is used for both dimensions


• a tuple [width, height] and [column, row], respectively.


The parameter Groups specifies the number of filter groups.


The parameter NumKernel specifies the number of filter kernels. NumKernel must be a multiple of Groups.
The parameter Padding determines the padding, thus how many pixels with value 0 are appended on the border
of the processed input image. Supported values are:

• ’half_kernel_size’: The number of appended pixels depends on the specified KernelSize. More
precisely, it is calculated as ⌊KernelSize / 2⌋, where for the padding on the left / right border the value of
KernelSize in dimension width is regarded and for the padding on the upper / lower border the value of
KernelSize in height.
• ’none’: No pixels are appended.
• Number of pixels: Specify the number of pixels appended on each border. To do so, the following tuple
lengths are supported:
– Single number: Padding in all four directions left/right/top/bottom.
– Two numbers: Padding in left/right and top/bottom: [l/r, t/b].
– Four numbers: Padding on left, right, top, bottom side: [l,r,t,b].
Restriction: ’runtime’ ’gpu’ does not support asymmetric padding, i.e., the padding values for the left
and right side must be equal, as well as the padding values for the top and bottom side.
Restriction: The integer padding values must be smaller than the value set for KernelSize in the corre-
sponding dimension.

The output dimensions of the convolution layer are given by

  output_dim = (input_dim + padding_begin + padding_end
                − (KernelSize + (KernelSize − 1) ∗ (Dilation − 1))) / Stride + 1

Thereby we use the following values: output_dim: output width/height, input_dim: input width/height,
padding_begin: number of pixels added to the left/top of the input image, and padding_end: number of pixels
added to the right/bottom of the input image.
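For example (a worked illustration, not part of the original text): with input_dim = 224, KernelSize = 3,
Dilation = 1, Stride = 1, and a padding of one pixel on each side (’half_kernel_size’ for a 3 ∗ 3 kernel), the
output dimension is (224 + 1 + 1 − 3) / 1 + 1 = 224, so the spatial size is preserved.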
The parameter Activation determines whether an activation is performed after the convolution in order to
optimize the runtime performance. The following values are supported:

• ’relu’: perform a ReLU activation after the convolution.


It is possible to specify an upper bound to the ReLU operation (see create_dl_layer_activation)
via the generic parameter ’upper_bound’. Note: It is not possible to specify a leaky ReLU.
• ’none’: no activation operation is performed.

We refer to the “Solution Guide on Classification” for more general information about the convo-
lution layer and the reference given below for more detailed information about the arithmetic of the layer.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’bias_filler’: See ’weight_filler’ for an explanation of the values.


List of values: ’xavier’, ’msra’, ’const’.
Default: ’const’
’bias_filler_variance_norm’: See ’weight_filler_variance_norm’ for an explanation of the values.
List of values: ’norm_in’, ’norm_out’, ’norm_average’.
Default: ’norm_out’
’bias_filler_const_val’: Specifies the constant bias term initialization value if ’bias_filler’ has been set to ’const’.
Restriction: Ignored for other values of ’bias_filler’.
Default: 0
’bias_term’: Determines whether the created convolutional layer has a bias term (’true’) or not (’false’).
Default: ’true’


’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
’learning_rate_multiplier’: Multiplier for the learning rate for this layer that is used during training. If ’learn-
ing_rate_multiplier’ is set to 0.0, the layer is skipped during training.
Default: 1.0
’learning_rate_multiplier_bias’: Multiplier for the learning rate of the bias term. The total bias learning rate is
the product of ’learning_rate_multiplier_bias’ and ’learning_rate_multiplier’.
Default: 1.0
’upper_bound’: Float value, which defines the upper bound for ReLU. To unset the upper bound, set ’up-
per_bound’ to an empty tuple.
Default: []
’weight_filler’: This parameter defines the mode how the weights are initialized. The following values are sup-
ported:
• ’const’: The weights are filled with constant values.
• ’msra’: The weights are drawn from a Gaussian distribution.
• ’xavier’: The weights are drawn from a uniform distribution.
Default: ’xavier’
’weight_filler_const_val’: Specifies the constant weight initialization value.
Restriction: Only applied if ’weight_filler’ = ’const’.
Default: 0.5
’weight_filler_variance_norm’: This parameter determines the value range for ’weight_filler’. The following val-
ues are supported:
• ’norm_average’: the values are based on the average of the input and output size
• ’norm_in’: the values are based on the input size
• ’norm_out’: the values are based on the output size.
Default: ’norm_in’

Certain parameters of layers created using create_dl_layer_convolution can be set and re-
trieved using further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.

Layer Parameters set get


’activation_mode’ (Activation) x
’dilation’ (Dilation) x
’groups’ (Groups) x
’input_depth’ x
’input_layer’ (DLLayerInput) x
’kernel_size’ (KernelSize) x
’name’ (LayerName) x x
’num_kernels’ (NumKernel) x
’output_layer’ (DLLayerConvolution) x
’padding’ (Padding) x
’padding_type’ (Padding) x
’shape’ x
’stride’ (Stride) x
’type’ x


Generic Layer Parameters set get


’bias_filler’ x x
’bias_filler_const_val’ x x
’bias_filler_variance_norm’ x x
’bias_term’ x
’is_inference_output’ x x
’learning_rate_multiplier’ x x
’learning_rate_multiplier_bias’ x x
’num_trainable_params’ x
’upper_bound’ x x
’weight_filler’ x x
’weight_filler_const_val’ x x
’weight_filler_variance_norm’ x x

Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. KernelSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; integer
Width and height of the filter kernels.
Default: 3
. Dilation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; integer
Amount of filter dilation for width and height.
Default: 1
. Stride (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; integer
Amount of filter shift in width and height direction.
Default: 1
. NumKernel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Number of filter kernels.
Default: 64
. Groups (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Number of filter groups.
Default: 1
. Padding (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; string / integer
Padding type or specific padding size.
Default: ’none’
List of values: Padding ∈ {’none’, ’half_kernel_size’, [all], [width,height], [left,right,top,bottom]}
Suggested values: Padding ∈ {’none’, ’half_kernel_size’}
. Activation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; string
Enable optional ReLU or sigmoid activations.
Default: ’none’
List of values: Activation ∈ {’none’, ’relu’, ’sigmoid’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’weight_filler’, ’weight_filler_variance_norm’,
’weight_filler_const_val’, ’bias_filler’, ’bias_filler_variance_norm’, ’bias_filler_const_val’, ’bias_term’,
’is_inference_output’, ’learning_rate_multiplier’, ’learning_rate_multiplier_bias’, ’upper_bound’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’xavier’, ’msra’, ’const’, ’nearest_neighbor’, ’bilinear’, ’norm_in’,
’norm_out’, ’norm_average’, ’true’, ’false’, 1.0, 0.9, 0.0}


. DLLayerConvolution (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle


Convolutional layer.
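Example

The following is a minimal sketch (not an existing example program): a strided convolution that halves
the spatial resolution; layer names and values are placeholders, and the fractional division result is
assumed to be rounded down.

create_dl_layer_input ('input', [224,224,3], [], [], DLLayerInput)
* 3x3 kernels, dilation 1, stride 2, 32 kernels, 1 group,
* 'half_kernel_size' padding (1 pixel on each border), ReLU activation.
* Output dimension: floor((224 + 1 + 1 - 3) / 2) + 1 = 112.
create_dl_layer_convolution (DLLayerInput, 'conv1', 3, 1, 2, 32, 1, \
                             'half_kernel_size', 'relu', [], [], \
                             DLLayerConvolution)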
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
References
V. Dumoulin, F. Visin: "A guide to convolution arithmetic for deep learning", 2018,
http://arxiv.org/abs/1603.07285
Module
Deep Learning Professional

create_dl_layer_dense ( : : DLLayerInput, LayerName, NumOut, GenParamName,
GenParamValue : DLLayerDense )

Create a dense layer.


The operator create_dl_layer_dense creates a dense or fully connected layer (sometimes also called
gemm) with NumOut output neurons whose handle is returned in DLLayerDense.
The parameter DLLayerInput determines the feeding input layer and expects the layer handle as value.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’bias_filler’: See create_dl_layer_convolution for a detailed explanation of this parameter and its val-
ues.
List of values: ’xavier’, ’msra’, ’const’.
Default: ’const’
’bias_filler_const_val’: Constant value if ’bias_filler’ = ’const’.
Default: 0
’bias_filler_variance_norm’: See create_dl_layer_convolution for a detailed explanation of this pa-
rameter and its values.
List of values: ’norm_out’, ’norm_in’, ’norm_average’, or constant value (in combination with ’bias_filler’
= ’msra’).
Default: ’norm_out’
’bias_term’: Determines whether the created dense layer has a bias term (’true’) or not (’false’).
Default: ’true’
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
’learning_rate_multiplier’: Multiplier for the learning rate for this layer that is used during training. If ’learn-
ing_rate_multiplier’ is set to 0.0, the layer is skipped during training.
Default: 1.0
’learning_rate_multiplier_bias’: Multiplier for the learning rate of the bias term. The total bias learning rate is
the product of ’learning_rate_multiplier_bias’ and ’learning_rate_multiplier’.
Default: 1.0
’weight_filler’: See create_dl_layer_convolution for a detailed explanation of this parameter and its
values.
List of values: ’xavier’, ’msra’, ’const’.
Default: ’xavier’


’weight_filler_const_val’: See create_dl_layer_convolution for a detailed explanation of this parameter
and its values.
Default: 0.5
’weight_filler_variance_norm’: See create_dl_layer_convolution for a detailed explanation of this pa-
rameter and its values.
List of values: ’norm_in’, ’norm_out’, ’norm_average’, or constant value (in combination with
’weight_filler’ = ’msra’).
Default: ’norm_in’

Certain parameters of layers created using create_dl_layer_dense can be set and retrieved us-
ing further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.

Layer Parameters set get


’input_layer’ (DLLayerInput) x
’name’ (LayerName) x x
’neurons_in’ x
’neurons_out’ (NumOut) x
’output_layer’ (DLLayerDense) x
’shape’ x
’type’ x

Generic Layer Parameters set get


’bias_filler’ x x
’bias_filler_const_val’ x x
’bias_filler_variance_norm’ x x
’bias_term’ x
’is_inference_output’ x x
’learning_rate_multiplier’ x x
’learning_rate_multiplier_bias’ x x
’num_trainable_params’ x
’weight_filler’ x x
’weight_filler_const_val’ x x
’weight_filler_variance_norm’ x x

Parameters

. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle


Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. NumOut (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Number of output neurons.
Default: 100
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’weight_filler’, ’weight_filler_variance_norm’,
’weight_filler_const_val’, ’bias_filler’, ’bias_filler_variance_norm’, ’bias_filler_const_val’, ’bias_term’,
’is_inference_output’, ’learning_rate_multiplier’, ’learning_rate_multiplier_bias’}


. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real


Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’xavier’, ’msra’, ’const’, ’nearest_neighbor’, ’bilinear’, ’norm_in’,
’norm_out’, ’norm_average’, ’true’, ’false’, 1.0, 0.9, 0.0}
. DLLayerDense (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Dense layer.
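Example

The following is a minimal sketch (not an existing example program): a dense layer used as a
classification head; layer names and the number of output neurons are placeholders.

create_dl_layer_input ('input', [32,32,64], [], [], DLLayerInput)
* Map the feature map to 10 output neurons, e.g., one per class.
create_dl_layer_dense (DLLayerInput, 'fc', 10, [], [], DLLayerDense)
create_dl_layer_softmax (DLLayerDense, 'softmax', [], [], DLLayerSoftMax)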
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Module
Deep Learning Professional

create_dl_layer_depth_max ( : : DLLayerInput, LayerName, DepthMaxMode,
GenParamName, GenParamValue : DLLayerDepthMaxArg, DLLayerDepthMaxValue )

Create a depth max layer.


The operator create_dl_layer_depth_max creates a depth max layer.
The parameter DLLayerInput determines the feeding input layer and expects the layer handle as value.
There are two possible output layers depending on DepthMaxMode:

• DLLayerDepthMaxArg: Handle to a depth max layer with mode ’argmax’.


• DLLayerDepthMaxValue: Handle to a depth max layer with mode ’value’.

Note, these parameters only need to be set in case such an output layer is requested (see DepthMaxMode).
The parameter LayerName defines the name of the output layer(s) depending on DepthMaxMode:

• ’argmax’: name of DLLayerDepthMaxArg.


• ’value’: name of DLLayerDepthMaxValue.
• ’argmax_and_value’: name of DLLayerDepthMaxArg, while the layer DLLayerDepthMaxValue re-
ceives the same name with the suffix string ’_value’ appended to it.

Note that if creating a model using create_dl_model each layer of the created network must have a unique
name.
The mode DepthMaxMode indicates which depth max value is actually returned as output. The following values
are supported:

’argmax’: The depth index of the maximal value is returned in DLLayerDepthMaxArg.
’value’: The maximal value itself is returned in DLLayerDepthMaxValue.
’argmax_and_value’: Both are returned, the depth index of the maximal value in the output layer
DLLayerDepthMaxArg, and the maximal value itself in the output layer DLLayerDepthMaxValue.

The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’


Certain parameters of layers created using this operator create_dl_layer_depth_max can be set and
retrieved using further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.

Layer Parameters set get


’input_layer’ (DLLayerInput) x
’mode’ (DepthMaxMode) x x
’name’ (LayerName) x x
’output_layer’ (DLLayerDepthMaxArg and/or x
DLLayerDepthMaxValue)
’shape’ x
’type’ x

Generic Layer Parameters set get


’is_inference_output’ x
’num_trainable_params’ x

Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. DepthMaxMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Mode to indicate type of return value.
Default: ’argmax’
List of values: DepthMaxMode ∈ {’argmax’, ’value’, ’argmax_and_value’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerDepthMaxArg (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer(-array) ; handle
Optional, depth max layer with mode ’argmax’.
. DLLayerDepthMaxValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer(-array) ; handle
Optional, depth max layer with mode ’value’.
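Example

The following is a minimal sketch (not an existing example program): a depth max layer turning
per-pixel class scores into an argmax map and the corresponding maximal score; layer names and the
number of classes are placeholders.

create_dl_layer_input ('input', [64,64,3], [], [], DLLayerInput)
create_dl_layer_convolution (DLLayerInput, 'conv_scores', 1, 1, 1, 5, 1, \
                             'none', 'none', [], [], DLLayerScores)
create_dl_layer_softmax (DLLayerScores, 'softmax', [], [], DLLayerSoftMax)
* Return both the index of the maximal class score and the score itself.
create_dl_layer_depth_max (DLLayerSoftMax, 'depth_max', 'argmax_and_value', \
                           [], [], DLLayerArgMax, DLLayerMaxValue)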
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Module
Deep Learning Professional

create_dl_layer_depth_to_space ( : : DLLayerInput, LayerName, BlockSize, Mode,
GenParamName, GenParamValue : DLLayerDepthToSpace )

Create a depth to space layer.


The operator create_dl_layer_depth_to_space creates a depth to space layer whose handle is returned
in DLLayerDepthToSpace.
The parameter DLLayerInput determines the feeding input layer and expects the layer handle as value.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
This layer rearranges the elements of the feeding tensor of shape (N, C ∗ r², H, W) to a tensor of shape
(N, C, H ∗ r, W ∗ r). Thereby r can be considered an upscale factor, which is set with BlockSize.
The output element (depth, row, col) is mapped from the input element
(depth ∗ r² + (row % r) ∗ r + col % r, row / r, col / r).
With Mode the ordering in the output tensor is set. Currently only the ’column_row_depth’ order described above
is available.
Certain parameters of layers created using this operator create_dl_layer_depth_to_space can be set
and retrieved using further operators. The following tables give an overview of which parameters can be set
using set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note that the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.

Layer Parameters set get


’input_layer’ (DLLayerInput) x
’name’ (LayerName) x x
’block_size’ (BlockSize) x
’shape’ x
’type’ x

Generic Layer Parameters set get


’is_inference_output’ x x
’num_trainable_params’ x

Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. BlockSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Block size (i.e., upscale factor).
Default: 3
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Ordering mode.
Default: ’column_row_depth’
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerDepthToSpace (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Depth to space layer.
Example


InputShape := [16, 16, 3]


Upscale := 2
*
create_dl_layer_input ('input', InputShape, [], [], DLLayerInput)
* Create a convolutional layer, that generates Upscale^2*NumChannel feature maps.
create_dl_layer_convolution (DLLayerInput, 'conv1', 3, 1, 1,\
Upscale * Upscale * InputShape[2],\
1, 'half_kernel_size', 'none',\
[], [], DLLayerConvolution)
* Use a depth to space layer to combine Upscale^2 feature maps to upscale.
create_dl_layer_depth_to_space (DLLayerConvolution, 'upscaled', Upscale,\
'column_row_depth',[], [],\
DLLayerDepthToSpace)
* The output shape of DLLayerDepthToSpace is now [16*Upscale, 16*Upscale, 3].
create_dl_model (DLLayerDepthToSpace, DLModel)

Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
create_dl_layer_input, create_dl_layer_concat, create_dl_layer_reshape
Possible Successors
create_dl_layer_convolution, create_dl_layer_dense, create_dl_layer_reshape
See also
create_dl_layer_reshape
Module
Deep Learning Professional

create_dl_layer_dropout ( : : DLLayerInput, LayerName,


Probability, GenParamName, GenParamValue : DLLayerDropOut )

Create a DropOut layer.


The operator create_dl_layer_dropout creates a DropOut layer with probability Probability and
returns the handle DLLayerDropOut.
The parameter DLLayerInput determines the feeding input layer and expects the layer handle as value.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
During training, activations within DLLayerInput are set to zero with probability Probability. All other
activations are rescaled with (1 - Probability).
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’

Certain parameters of layers created using this operator create_dl_layer_dropout can be set and
retrieved using further operators. The following tables give an overview of which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note that the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.


Layer Parameters set get


’input_layer’ (DLLayerInput) x
’name’ (LayerName) x x
’output_layer’ (DLLayerDropOut) x
’probability’ (Probability) x
’shape’ x
’type’ x

Generic Layer Parameters set get


’is_inference_output’ x x
’num_trainable_params’ x

Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. Probability (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Probability.
Default: 0.5
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerDropOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
DropOut layer.
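A minimal usage sketch (the layer names, shapes, and the dropout probability below are chosen for illustration
only and are not part of the operator's specification):

create_dl_layer_input ('input', [32,32,3], [], [], DLLayerInput)
create_dl_layer_convolution (DLLayerInput, 'conv1', 3, 1, 1, 16, 1, \
                             'half_kernel_size', 'none', [], [], \
                             DLLayerConvolution)
* Randomly zero activations with probability 0.3 during training.
create_dl_layer_dropout (DLLayerConvolution, 'dropout1', 0.3, [], [], \
                         DLLayerDropOut)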
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Module
Deep Learning Professional

create_dl_layer_elementwise ( : : DLLayerInputs, LayerName,


Operation, Coefficients, GenParamName,
GenParamValue : DLLayerElementWise )

Create an elementwise layer.


The operator create_dl_layer_elementwise creates an element-wise layer whose handle is returned in
DLLayerElementWise.
An elementwise layer applies a certain operation element by element to the data tensors of the input layer
handles. As a consequence, all input data tensors should be of the same shape, and the output tensor has the same
shape as the first input tensor.
The parameter DLLayerInputs determines the feeding input layers. This layer expects multiple layers as input.
For Operation = ’division’ exactly two input layers are expected.

HALCON/HDevelop Reference Manual, 2024-11-13


9.3. FRAMEWORK 677

The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The parameter Operation specifies the operation that is applied. Depending on Operation, the layer supports
implicit broadcasting. I.e., if one of the shape dimensions (batch_size, depth, height, width) of the
second or any of the following input tensors is 1, the values are implicitly replicated along that dimension to
match the shape of the first input. The supported values are:

• ’division’: Element-wise division. Broadcasting is fully supported.


• ’maximum’: Element-wise maximum. Broadcasting is fully supported.
• ’minimum’: Element-wise minimum. Broadcasting is fully supported.
• ’product’: Element-wise product. Broadcasting is supported, but all inputs following the second input must
have the same shape as the second input.
• ’sum’: Element-wise summation. Broadcasting is not supported.

The optional parameter Coefficients determines a weighting coefficient for every input tensor. The number of
values in Coefficients must match the number of feeding layers in DLLayerInputs. Set Coefficients
equal to [] if no coefficients shall be used in the element-wise operation.
Restriction: No coefficients can be set for Operation = ’product’.
Example: for Operation = ’sum’, the i-th element of the output data tensor is given by

    output[i] = \sum_{n=0}^{N-1} Coefficients[n] \cdot DLLayerInputs_n[i],

where N is the number of input data tensors.
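For example, a weighted sum of two inputs can be sketched as follows (the layer names and shapes are
illustrative assumptions):

create_dl_layer_input ('input_a', [4, 4, 2], [], [], DLLayerInputA)
create_dl_layer_input ('input_b', [4, 4, 2], [], [], DLLayerInputB)
* Compute 0.25 * input_a + 0.75 * input_b element by element.
create_dl_layer_elementwise ([DLLayerInputA, DLLayerInputB], 'weighted_sum', \
                             'sum', [0.25, 0.75], [], [], DLLayerElementWise)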


The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’div_eps’: Small scalar value that is added to the elements of the denominator to avoid a division by zero (for
Operation = ’division’).
Default: 1e-10
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’

Certain parameters of layers created using this operator create_dl_layer_elementwise can be set and
retrieved using further operators. The following tables give an overview of which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note that the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.

Layer Parameters set get


’coefficients’ (Coefficients) x
’input_layer’ (DLLayerInputs) x
’name’ (LayerName) x x
’operation’ (Operation) x
’output_layer’ (DLLayerElementWise) x
’shape’ x
’type’ x

Generic Layer Parameters set get


’div_eps’ x x
’is_inference_output’ x x
’num_trainable_params’ x


Parameters
. DLLayerInputs (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer(-array) ; handle
Feeding input layers.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. Operation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Element-wise operations.
Default: ’sum’
List of values: Operation ∈ {’division’, ’maximum’, ’minimum’, ’product’, ’sum’}
. Coefficients (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
Optional input tensor coefficients.
Default: []
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerElementWise (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Elementwise layer.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Module
Deep Learning Professional

create_dl_layer_identity ( : : DLLayerInput, LayerName,


GenParamName, GenParamValue : DLLayerIdentity )

Create an identity layer.


The operator create_dl_layer_identity creates an identity layer whose handle is returned in
DLLayerIdentity.
The parameter DLLayerInput determines the feeding input layer and expects the layer handle as value.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’

Certain parameters of layers created using this operator create_dl_layer_identity can be set and
retrieved using further operators. The following tables give an overview of which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note that the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.


Layer Parameters set get


’input_layer’ (DLLayerInput) x
’name’ (LayerName) x x
’output_layer’ (DLLayerIdentity) x
’shape’ x
’type’ x

Generic Layer Parameters set get


’is_inference_output’ x x
’num_trainable_params’ x

Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerIdentity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Identity layer.
Example

* Create a model that concatenates the output of a convolution layer.


create_dl_layer_input ('input', [10,10,3], [], [], DLLayerInput)
create_dl_layer_convolution (DLLayerInput, 'conv', 3, 1, 1, 8, 1, 'none', \
'none', [], [], DLLayerConvolution)
* Using the same layer multiple times as input does not work, so make a copy.
create_dl_layer_identity (DLLayerConvolution, 'conv_copy', [], [], \
DLLayerIdentity)
create_dl_layer_concat ([DLLayerConvolution, DLLayerIdentity], 'concat', \
'depth', [], [], DLLayerConcat)
create_dl_model (DLLayerConcat, DLModelHandle)

Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Successors
create_dl_layer_elementwise, create_dl_layer_concat
Module
Deep Learning Professional

create_dl_layer_input ( : : LayerName, Shape, GenParamName,


GenParamValue : DLLayerInput )

Create an input layer.


The operator create_dl_layer_input creates an input layer with spatial dimensions given by Shape whose
handle is returned in DLLayerInput.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
When the created model is applied using e.g., apply_dl_model or train_dl_model_batch, it must be
possible to map an input with its corresponding input layer. Operators applying a model expect a feeding dictionary
DLSample, see Deep Learning / Model. The mentioned mapping is done using dictionary entries, where the key
matches the input layer name. Thus, for an input of this layer a sample dictionary will need an entry with the key
LayerName (except if the ’input_type’ is set to ’constant’, see below).
The parameter Shape defines the shape of the input values (the values given in the feeding dictionary DLSample)
and must be a tuple of length three, containing width, height, and depth of the input. The tuple values must
be given as integer values and have different meanings depending on the input type:

• for an input image the layer Shape defines the image size. Images shall be given with type real (for
information on image types see Image).
• for an input tuple its length will need to match the product of the individual values in Shape, i.e., width ×
height × depth.
Tuple values are distributed along the column- (width), row- (height), and depth-axes in this order.
Input tuple values can be given either as integer or real.

The batch size has to be set later with set_dl_model_param, once the model has been created by
create_dl_model.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’allow_smaller_tuple’: For tuple inputs, setting ’allow_smaller_tuple’ to ’true’ allows an input tuple with
fewer values than the total dimension given by Shape. E.g., this can be the case if an input corresponds to the
number of objects within one image and the number of objects changes from image to image. If fewer than
the maximum number of values given by the total dimension of Shape are present, the remaining values are
set to zero.
Shape should be set such that it fits the maximum expected length. For the example above this would be the
maximum number of objects within one image present in the whole dataset.
Default: ’false’.
’const_val’: Constant output value.
Restriction:
Only an integer or float is settable. This value is only settable or gettable if ’input_type’ is set to ’constant’.
Default: 0.0.
’input_type’: Defines the type of input that is expected. The following values are possible:
’default’: The layer expects a number of input images corresponding to the batch size.
’region_to_bin’: The layer expects a tuple of regions as input and internally converts it to a binary image
where each region is encoded in one depth channel. Regions reaching out of the given dimensions are
clipped to the width and height given by Shape. The maximum number of regions is defined by the
depth of Shape. If fewer than the maximum number of regions are given, the output is filled up with
empty (zero) images. For example, this can be the case if the regions are corresponding to objects within
an image and the number of objects changes from image to image.
’constant’: The layer does not expect any key value pair in the input dictionary. Instead all entries within the
output of this layer are filled with the value given by ’const_val’.
Default: ’default’.
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’

Certain parameters of layers created using create_dl_layer_input can be set and retrieved using
further operators. The following tables give an overview of which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note that the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.


Layer Parameters set get


’input_layer’ x
’name’ (LayerName) x x
’output_layer’ (DLLayerInput) x
’shape’ (Shape) x
’type’ x

Generic Layer Parameters set get


’allow_smaller_tuple’ x
’const_val’ (x) (x)
’input_type’ x
’is_inference_output’ x x
’num_trainable_params’ x

Parameters
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. Shape (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer
Dimensions of the input (width, height, depth).
Default: [224,224,3]
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’allow_smaller_tuple’, ’const_val’, ’input_type’, ’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {0.0, ’constant’, ’default’, ’false’, ’region_to_bin’, ’true’}
. DLLayerInput (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Input layer.
Example

* Create a model for summation.


create_dl_layer_input ('input_a', [2, 3, 4], [], [], DLLayerInputA)
create_dl_layer_input ('input_b', [2, 3, 4], [], [], DLLayerInputB)
create_dl_layer_elementwise ([DLLayerInputA, DLLayerInputB], 'sum', \
'sum', [], [], [], DLLayerElementWise)
create_dl_model (DLLayerElementWise, DLModel)
set_dl_model_param (DLModel, 'runtime', 'cpu')
*
* Add 'input_a' as an inference model output.
set_dl_model_layer_param (DLModel, 'input_a', 'is_inference_output', 'true')
*
* Feed input data as tuple (a) or image (b).
create_dict (Sample)
set_dict_tuple (Sample, 'input_a', [1:(2*3*4)])
gen_empty_obj (InputB)
for I := 1 to 4 by 1
gen_image_const (Channel, 'real', 2, 3)
get_region_points (Channel, Rows, Cols)
set_grayval (Channel, Rows, Cols, gen_tuple_const(|Rows|, I))
append_channel (InputB, Channel, InputB)
endfor
set_dict_object (InputB, Sample, 'input_b')
*


* Apply the model for summation and get results.


set_dl_model_param (DLModel, 'batch_size', 2)
apply_dl_model (DLModel, [Sample,Sample], [], Result)
get_dict_object (Sum, Result[0], 'sum')
get_dict_object (TupleInputA, Result[1], 'input_a')

Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
create_dl_layer_activation, create_dl_layer_batch_normalization,
create_dl_layer_class_id_conversion, create_dl_layer_class_id_conversion,
create_dl_layer_concat, create_dl_layer_convolution, create_dl_layer_dense,
create_dl_layer_depth_max, create_dl_layer_dropout,
create_dl_layer_elementwise, create_dl_layer_loss_cross_entropy,
create_dl_layer_loss_ctc, create_dl_layer_loss_distance,
create_dl_layer_loss_focal, create_dl_layer_loss_huber, create_dl_layer_lrn,
create_dl_layer_pooling, create_dl_layer_reduce, create_dl_layer_reshape,
create_dl_layer_softmax, create_dl_layer_transposed_convolution,
create_dl_layer_zoom_factor, create_dl_layer_zoom_size,
create_dl_layer_zoom_to_layer_size
Module
Deep Learning Professional

create_dl_layer_loss_cross_entropy ( : : DLLayerInput,
DLLayerTarget, DLLayerWeights, LayerName, LossWeight,
GenParamName, GenParamValue : DLLayerLossCrossEntropy )

Create a cross entropy loss layer.


The operator create_dl_layer_loss_cross_entropy creates a cross entropy loss layer whose handle
is returned in DLLayerLossCrossEntropy. This layer computes the two dimensional cross entropy loss on
the input (provided by DLLayerInput) given the corresponding target (provided by DLLayerTarget) and
weight (provided by DLLayerWeights).
Cross entropy is commonly used to measure the similarity between two vectors.
Example: Consider an illustrative pixel-level classification problem with three classes.
The input vector for a single pixel is x = [0.7, 0.1, 0.2] (e.g., the output of a softmax layer) which means that
the predicted value (e.g., probability) is 0.7 for the class at index 0, 0.1 for the class at index 1 and 0.2 for the
class at index 2.
The target vector is t = [1.0, 0.0, 0.0] with a probability of 1.0 for the actual class and 0.0 else. Entropy is
calculated by the dot product of these two vectors. Since the target vector has only one non-zero entry, it can
be given by the index of the actual class instead of a vector, in this case t = 0 .
The cross entropy is then simply the value of the input vector at the target class index, hence x[t] = 0.7 .
Using this simplification, the cross entropy loss function over an input image can be defined by

    L_{\mathrm{cross\_entropy}}(x, t, w) := -\frac{1}{W} \sum_{i=0}^{N-1} w_i \cdot x_i[t_i],

where the input x consists of one prediction vector x_i for each pixel, the target t and weight w consist of one
value t_i and w_i for each input pixel, N is the number of pixels, and W = \sum_{i=0}^{N-1} w_i is the sum over
all weights.


Hence, this layer expects multiple incoming layers:

• DLLayerInput: Specifies the prediction (e.g., a softmax layer, commonly with logarithmized results).
• DLLayerTarget: Specifies the target sequences (originating from the ground truth information).
• DLLayerWeights: Specifies the weight sequences. This parameter is optional. If an empty tuple [] is
passed, the weighting factor 1.0 is used for all values.

The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The parameter LossWeight determines the scalar weight factor with which the loss, calculated in this layer, is
multiplied. This parameter can be used to specify the contribution of the cross entropy loss to the overall network
loss in case multiple loss layers are used.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’

Certain parameters of layers created using this operator create_dl_layer_loss_cross_entropy can be
set and retrieved using further operators. The following tables give an overview of which parameters can be set
using set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note that the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.

Layer Parameters set get


’input_layer’ (DLLayerInput, DLLayerTarget, x
and/or DLLayerWeights)
’loss_weight’ (LossWeight) x x
’name’ (LayerName) x x
’output_layer’ (DLLayerLossCrossEntropy) x
’shape’ x
’type’ x

Generic Layer Parameters set get


’is_inference_output’ x x
’num_trainable_params’ x

Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Input layer.
. DLLayerTarget (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Target layer.
. DLLayerWeights (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Weights layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. LossWeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Overall loss weight if there are multiple losses in the network.
Default: 1.0
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}


. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real


Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerLossCrossEntropy (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Cross entropy loss layer.
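A minimal wiring sketch for a pixel-level classification loss (the layer names, shapes, and the number of
classes are illustrative assumptions, not part of the operator's specification):

create_dl_layer_input ('image', [64, 64, 3], [], [], DLLayerImage)
create_dl_layer_convolution (DLLayerImage, 'class_scores', 1, 1, 1, 3, 1, \
                             'none', 'none', [], [], DLLayerScores)
create_dl_layer_softmax (DLLayerScores, 'softmax', [], [], DLLayerSoftMax)
* Target and weights contain one value per pixel.
create_dl_layer_input ('target', [64, 64, 1], [], [], DLLayerTarget)
create_dl_layer_input ('weights', [64, 64, 1], [], [], DLLayerWeights)
create_dl_layer_loss_cross_entropy (DLLayerSoftMax, DLLayerTarget, \
                                    DLLayerWeights, 'ce_loss', 1.0, [], [], \
                                    DLLayerLossCrossEntropy)
create_dl_model (DLLayerLossCrossEntropy, DLModel)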
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Module
Deep Learning Professional

create_dl_layer_loss_ctc ( : : DLLayerInput, DLLayerInputLengths,


DLLayerTarget, DLLayerTargetLengths, LayerName, GenParamName,
GenParamValue : DLLayerLossCTC )

Create a CTC loss layer.


The operator create_dl_layer_loss_ctc creates a Connectionist Temporal Classification (CTC) loss layer
whose handle is returned in DLLayerLossCTC. See the reference cited below for information about the CTC
loss.
With this loss layer it is possible to train sequence to sequence models (Seq2Seq). E.g., it can be used to train
a model that is able to read text in an image. In order to do so, the sequences are compared, i.e., the determined
network prediction DLLayerInput (with sequence lengths DLLayerInputLengths) is compared to the
given target DLLayerTarget (with sequence lengths DLLayerTargetLengths).
The following variables are important to understand the input shapes:

• T: Maximum input sequence length (i.e., width of DLLayerInput)


• S: Maximum output sequence length (i.e., width of DLLayerTarget)
• C: Number of classes including 0 as the blank class ID (i.e., depth of DLLayerInput)

This layer expects multiple layers as input:

• DLLayerInput: Specifies the network prediction.


Shape: [T,1,C]
• DLLayerInputLengths: Specifies the input sequence length of each item in the batch.
Shape: [1,1,1]
• DLLayerTarget: Specifies the target sequences.
Shape: [S,1,1]
• DLLayerTargetLengths: Input layer which specifies the target sequence length of each item in the
batch.
Shape: [1,1,1]

The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The CTC loss is typically applied in a CNN as follows. The input sequence is expected to be encoded in some CNN
layer with the output shape [width: T, height: 1, depth: C]. Typically the end of a large fully convolutional
classifier is pooled in height down to 1 with an average pooling layer. It is important that the last layer is
wide enough to hold enough information. In order to obtain the sequence prediction in the output depth a 1x1
convolutional layer is added after the pooling with the number of kernels set to C. In this use case the CTC loss
obtains this convolutional layer as input layer DLLayerInput. The width of the input layer determines the
maximum output sequence of the model.


The CTC loss can be applied to a batch of input items with differing input and target sequence lengths. T and S
are the maximum lengths. In DLLayerInputLengths and DLLayerTargetLengths the individual length
of each item in a batch needs to be specified.

Restrictions

• A model containing this layer cannot be trained on a CPU.


• A model containing this layer cannot be trained with a ’batch_size_multiplier’ != 1.0.
• The input layer DLLayerInput must not be a softmax layer. The softmax calculation is done internally
in this layer. For inference, there should be an extra softmax layer connected to the DLLayerInput
(see create_dl_layer_softmax).

The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’

Certain parameters of layers created using this operator create_dl_layer_loss_ctc can be set and
retrieved using further operators. The following tables give an overview of which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note that the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.

Layer Parameters set get


’input_layer’ (DLLayerInput, x
DLLayerInputLengths, DLLayerTarget, and/or
DLLayerTargetLengths)
’name’ (LayerName) x x
’output_layer’ (DLLayerLossCTC) x
’shape’ x
’type’ x

Generic Layer Parameters set get


’is_inference_output’ x x
’num_trainable_params’ x

Parameters

. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle


Input layer with network predictions.
. DLLayerInputLengths (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Input layer which specifies the input sequence length of each item in the batch.
. DLLayerTarget (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Input layer which specifies the target sequences. If the input dimensions of the CNN are changed the width of
this layer is automatically resized to the same width as the DLLayerInput layer.
. DLLayerTargetLengths (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Input layer which specifies the target sequence length of each item in the batch.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}


. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real


Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerLossCTC (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
CTC loss layer.
Example

* Create a simple Seq2Seq model which overfits to a single output sequence.

* Input sequence length


T := 6
* Number of classes including blank (blank is always class_id: 0)
C := 3
* Batch Size
N := 1
* Maximum length of target sequences
S := 3

* Model creation
create_dl_layer_input ('input', [T,1,1], [], [], Input)
create_dl_layer_dense (Input, 'dense', T*C, [], [], DLLayerDense)
create_dl_layer_reshape (DLLayerDense, 'dense_reshape', [T,1,C], [], [],\
ConvFinal)

* Training part

* Specify the shapes without batch-size


* (batch-size will be specified in the model).
create_dl_layer_input ('ctc_input_lengths', [1,1,1], [], [],\
DLLayerInputLengths)
create_dl_layer_input ('ctc_target', [S,1,1], [], [], DLLayerTarget)
create_dl_layer_input ('ctc_target_lengths', [1,1,1], [], [],\
DLLayerTargetLengths)
* Create the loss layer
create_dl_layer_loss_ctc (ConvFinal, DLLayerInputLengths, DLLayerTarget,\
DLLayerTargetLengths, 'ctc_loss', [], [],\
DLLayerLossCTC)

* Get all names so that users can set values


get_dl_layer_param (ConvFinal, 'name', CTCInputName)
get_dl_layer_param (DLLayerInputLengths, 'name', CTCInputLengthsName)
get_dl_layer_param (DLLayerTarget, 'name', CTCTargetName)
get_dl_layer_param (DLLayerTargetLengths, 'name', CTCTargetLengthsName)

* Inference part
create_dl_layer_softmax (ConvFinal, 'softmax', [], [], DLLayerSoftMax)
create_dl_layer_depth_max (DLLayerSoftMax, 'prediction', 'argmax', [], [],\
DLLayerDepthMaxArg, _)

* Setting a seed because the weights of the network are randomly initialized
set_system ('seed_rand', 35)

create_dl_model ([DLLayerLossCTC,DLLayerDepthMaxArg], DLModel)

set_dl_model_param (DLModel, 'batch_size', N)


set_dl_model_param (DLModel, 'runtime', 'gpu')
set_dl_model_param (DLModel, 'learning_rate', 1)


* Create input sample for training


InputSequence := [0,1,2,3,4,5]
TargetSequence := [1,2,1]
create_dict (InputSample)
set_dict_tuple (InputSample, 'input', InputSequence)
set_dict_tuple (InputSample, 'ctc_input_lengths', |InputSequence|)
set_dict_tuple (InputSample, 'ctc_target', TargetSequence)
set_dict_tuple (InputSample, 'ctc_target_lengths', |TargetSequence|)
Eps := 0.01

PredictedSequence := []
dev_inspect_ctrl ([InputSequence, TargetSequence, CTCLoss, PredictedValues,\
PredictedSequence])
MaxIterations:= 15
for I := 0 to MaxIterations by 1
apply_dl_model (DLModel, InputSample, ['prediction','softmax'], \
DLResultBatch)
get_dict_object (Softmax, DLResultBatch, 'softmax')
get_dict_object (Prediction, DLResultBatch, 'prediction')
PredictedValues := []
for t := 0 to T-1 by 1
get_grayval (Prediction, 0, t, PredictionValue)
PredictedValues := [PredictedValues, PredictionValue]
endfor
train_dl_model_batch (DLModel, InputSample, DLTrainResult)

get_dict_tuple (DLTrainResult, 'ctc_loss', CTCLoss)


if (CTCLoss < Eps)
break
endif
stop()
endfor

* Rudimentary implementation of fastest path prediction


PredictedSequence := []
LastV := -1
for I := 0 to |PredictedValues|-1 by 1
V := PredictedValues[I]
if (V == 0)
LastV := -1
continue
endif
if (|PredictedSequence| > 0 and V == LastV)
continue
endif
PredictedSequence := [PredictedSequence, V]
LastV := PredictedSequence[|PredictedSequence|-1]
endfor

Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

References
Graves Alex et al., "Connectionist temporal classification: labelling unsegmented sequence data with recurrent
neural networks." Proceedings of the 23rd international conference on Machine learning. 2006.


Module
Deep Learning Professional

create_dl_layer_loss_distance ( : : DLLayerInput, DLLayerTarget,


DLLayerWeights, LayerName, DistanceType, LossWeight, GenParamName,
GenParamValue : DLLayerLossDistance )

Create a distance loss layer.


The operator create_dl_layer_loss_distance creates a distance loss layer whose handle is returned in
DLLayerLossDistance.
This layer expects multiple layers as input:

• DLLayerInput: Specifies the prediction (e.g., a softmax layer).


• DLLayerTarget: Specifies the target sequences (originating from the ground truth information).
• DLLayerWeights: Specifies the weight sequences. This parameter is optional. If an empty tuple [] is
passed, the weighting factor 1.0 is used for all values.

The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The parameter LossWeight is an overall loss weight if there are multiple losses in the network.
The parameter DistanceType determines which distance measure is applied. Currently, ’l2’ and ’l1’ are imple-
mented. Depending on the generic parameter ’reduce’ this results in

• L2 loss distance as a tensor:

    DLLayerLossDistance[i] = 0.5 \cdot LossWeight \cdot DLLayerWeights[i] \cdot (DLLayerInput[i] - DLLayerTarget[i])^2,

  in this case the loss is a tensor of the same size as DLLayerInput.

• L2 loss distance as a scalar:

    DLLayerLossDistance = \frac{\sum_{i=0}^{N-1} DLLayerLossDistance[i]}{\sum_{i=0}^{N-1} DLLayerWeights[i]},

  where N is the number of elements in DLLayerInput.

• L1 loss distance as a tensor:

    DLLayerLossDistance[i] = LossWeight \cdot DLLayerWeights[i] \cdot |DLLayerInput[i] - DLLayerTarget[i]|,

  in this case the loss is a tensor of the same size as DLLayerInput.

• L1 loss distance as a scalar:

    DLLayerLossDistance = \frac{\sum_{i=0}^{N-1} DLLayerLossDistance[i]}{\sum_{i=0}^{N-1} DLLayerWeights[i]},

  where N is the number of elements in DLLayerInput.

Thus DLLayerInput, DLLayerTarget and DLLayerWeights should have the same size. Setting the
weights in DLLayerWeights to 1 will result in a loss normalized over the number of elements.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:


’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
’reduce’: Determines whether the output of the layer is reduced:
• ’true’: The output is reduced to a scalar.
• ’false’: The output of the layer is a tensor, where each element is a ’per-pixel’ loss (squared differences).
Default: ’true’.

Certain parameters of layers created using this operator create_dl_layer_loss_distance can be set
and retrieved using further operators. The following tables give an overview of which parameters can be set
using set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note that the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.

Layer Parameters set get


’input_layer’ (DLLayerInput, DLLayerTarget, x
and/or DLLayerWeights)
’loss_weight’ (LossWeight) x x
’name’ (LayerName) x x
’output_layer’ (DLLayerLossDistance) x
’shape’ x
’type’ x
’distance_type’ (DistanceType) x

Generic Layer Parameters set get


’is_inference_output’ x x
’num_trainable_params’ x
’reduce’ x x

Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Input layer.
. DLLayerTarget (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Target layer.
. DLLayerWeights (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Weights layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. DistanceType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of distance.
Default: ’l2’
List of values: DistanceType ∈ {’l2’, ’l1’}
. LossWeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Loss weight. Applies to all losses, if several losses occur in the network.
Default: 1.0
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’, ’reduce’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}


. DLLayerLossDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle


Distance loss layer.
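A minimal sketch of an L2 regression loss (the layer names and shapes are illustrative assumptions); prediction,
target, and weights share the same shape:

create_dl_layer_input ('input', [32, 32, 1], [], [], DLLayerInput)
create_dl_layer_convolution (DLLayerInput, 'prediction', 1, 1, 1, 1, 1, \
                             'none', 'none', [], [], DLLayerPrediction)
create_dl_layer_input ('target', [32, 32, 1], [], [], DLLayerTarget)
create_dl_layer_input ('weights', [32, 32, 1], [], [], DLLayerWeights)
* 'l2' distance, reduced to a scalar loss by default ('reduce' = 'true').
create_dl_layer_loss_distance (DLLayerPrediction, DLLayerTarget, \
                               DLLayerWeights, 'l2_loss', 'l2', 1.0, [], [], \
                               DLLayerLossDistance)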
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Module
Deep Learning Professional

create_dl_layer_loss_focal ( : : DLLayerInput, DLLayerTarget,


DLLayerWeights, DLLayerNormalization, LayerName, LossWeight,
Gamma, ClassWeights, Type, GenParamName,
GenParamValue : DLLayerLossFocal )

Create a focal loss layer.


The operator create_dl_layer_loss_focal creates a focal loss layer whose handle is returned in
DLLayerLossFocal. See the reference cited below for further information about its definition and parame-
ter meanings.
This layer expects multiple layers as input:

• DLLayerInput: Specifies the prediction (e.g., a sigmoid or softmax layer).


• DLLayerTarget: Specifies the target sequences (originating from the ground truth information).
• DLLayerWeights: Specifies the weight sequences. This parameter is optional. If an empty tuple [] is
passed, the weighting factor 1.0 is used for all values.
• DLLayerNormalization: Specifies the factor to normalize the loss. This parameter is optional; it can
either be given as a layer handle or ignored by passing an empty tuple [].

The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The parameter LossWeight is an overall loss weight if there are multiple losses in the network.
The parameter Gamma is the exponent of the focal factor.
The parameter ClassWeights defines class-specific weights. All loss contributions of foreground samples
of a class are weighted with the given factor. The background samples are weighted by 1 - ClassWeights.
Typically, this is set to 1.0/(number of samples of the class). Note that the length of this tuple has to be either 1,
in which case it is broadcast to the number of classes, or it has to correspond to the number of classes. The default
value [] corresponds to a factor of 0.5 for all classes. Note that if the number of classes is changed on a network,
the class-specific weights are also adapted and reset to the default value 0.5 for each class.
The parameter Type sets the focal loss options:

’focal_binary’: Focal loss.


’sigmoid_focal_binary’: Focal loss fused with sigmoid.

The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’

Certain parameters of layers created using this operator create_dl_layer_loss_focal can be set and
retrieved using further operators. The following tables give an overview of which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note that the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.


Layer Parameters set get


’focal_type’ (Type) x
’gamma’ (Gamma) x x
’input_layer’ (DLLayerInput, x
DLLayerTarget, DLLayerWeights, and/or
DLLayerNormalization)
’loss_weight’ (LossWeight) x x
’name’ (LayerName) x x
’output_layer’ (DLLayerLossFocal) x
’shape’ x
’type’ x

Generic Layer Parameters set get


’is_inference_output’ x x
’num_trainable_params’ x

Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Input layer.
. DLLayerTarget (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Target layer.
. DLLayerWeights (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Weights layer.
. DLLayerNormalization (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Normalization layer.
Default: []
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. LossWeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Overall loss weight if there are multiple losses in the network.
Default: 1.0
. Gamma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Exponent of the focal factor.
Default: 2.0
. ClassWeights (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Class specific weight.
Default: []
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Focal loss type.
Default: ’focal_binary’
List of values: Type ∈ {’focal_binary’, ’sigmoid_focal_binary’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerLossFocal (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Focal loss layer.
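A minimal sketch (the layer names, shapes, and the number of classes are illustrative assumptions); the optional
weights and normalization inputs are omitted by passing []:

create_dl_layer_input ('image', [64, 64, 1], [], [], DLLayerImage)
create_dl_layer_convolution (DLLayerImage, 'scores', 1, 1, 1, 2, 1, \
                             'none', 'none', [], [], DLLayerScores)
create_dl_layer_softmax (DLLayerScores, 'softmax', [], [], DLLayerSoftMax)
create_dl_layer_input ('target', [64, 64, 2], [], [], DLLayerTarget)
* Gamma = 2.0, default class weights ([]), plain binary focal loss.
create_dl_layer_loss_focal (DLLayerSoftMax, DLLayerTarget, [], [], \
                            'focal_loss', 1.0, 2.0, [], 'focal_binary', \
                            [], [], DLLayerLossFocal)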
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).


• Processed without parallelization.
References
T. Lin, P. Goyal, R. Girshick, K. He and P. Dollar, "Focal Loss for Dense Object Detection," in IEEE Trans-
actions on Pattern Analysis and Machine Intelligence, vol. 42, no. 2, pp. 318-327, 1 Feb. 2020, doi:
10.1109/TPAMI.2018.2858826.
Module
Deep Learning Professional

create_dl_layer_loss_huber ( : : DLLayerInput, DLLayerTarget,


DLLayerWeights, DLLayerNormalization, LayerName, LossWeight, Beta,
GenParamName, GenParamValue : DLLayerLossHuber )

Create a Huber loss layer.


The operator create_dl_layer_loss_huber creates a Huber loss layer whose handle is returned in
DLLayerLossHuber. The Huber loss is defined by

    L_{\mathrm{Huber}}(x, t, w, n) := \frac{\alpha}{n} \sum_{i=0}^{N-1} w_i \cdot l(x_i - t_i), \quad \text{with}

    l(y) := \begin{cases} 0.5\, y^2 / \beta & \text{if } |y| < \beta \\ |y| - 0.5\, \beta & \text{else.} \end{cases}

This layer expects multiple layers as input:

• DLLayerInput: Specifies x (usually a softmax layer).


• DLLayerTarget: Specifies the targets t.
• DLLayerWeights: Specifies the weights w.

The underlying data tensors are assumed to be of the same shape with a total number of N elements.
The parameter DLLayerNormalization can be used to determine the normalization factor n. If
DLLayerNormalization is set to an empty tuple, the sum over all weights is used for the normalization
n.
The parameter LossWeight determines the scalar weight factor α.
The parameter Beta sets the value for β in the formula. If Beta is set to 0, the Huber loss is equal to an L1-loss.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’

Certain parameters of layers created using this operator create_dl_layer_loss_huber can be set and
retrieved using further operators. The following tables give an overview of which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note that the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.


Layer Parameters set get


’beta’ (Beta) x x
’input_layer’ (DLLayerInput, x
DLLayerTarget, DLLayerWeights, and/or
DLLayerNormalization)
’loss_weight’ (LossWeight) x x
’name’ (LayerName) x x
’output_layer’ (DLLayerLossHuber) x
’shape’ x
’type’ x

Generic Layer Parameters set get


’is_inference_output’ x x
’num_trainable_params’ x

Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Input layer.
. DLLayerTarget (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Target layer.
. DLLayerWeights (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Weights layer.
. DLLayerNormalization (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Normalization layer.
Default: []
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. LossWeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Scalar weight factor.
Default: 1.0
. Beta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Beta value in the loss-defining formula.
Default: 1.1
Restriction: Beta >= 0
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerLossHuber (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Huber loss layer.
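A minimal sketch (the layer names and shapes are illustrative assumptions); prediction, target, and weights share
the same shape, and no extra normalization layer is used:

create_dl_layer_input ('input', [16, 16, 1], [], [], DLLayerInput)
create_dl_layer_convolution (DLLayerInput, 'prediction', 1, 1, 1, 1, 1, \
                             'none', 'none', [], [], DLLayerPrediction)
create_dl_layer_input ('target', [16, 16, 1], [], [], DLLayerTarget)
create_dl_layer_input ('weights', [16, 16, 1], [], [], DLLayerWeights)
* Beta = 1.1 (the default); with [] for normalization the weight sum is used.
create_dl_layer_loss_huber (DLLayerPrediction, DLLayerTarget, DLLayerWeights, \
                            [], 'huber_loss', 1.0, 1.1, [], [], \
                            DLLayerLossHuber)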
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Module
Deep Learning Professional


create_dl_layer_lrn ( : : DLLayerInput, LayerName, LocalSize,


Alpha, Beta, K, NormRegion, GenParamName,
GenParamValue : DLLayerLRN )

Create a LRN layer.


The operator create_dl_layer_lrn creates a local response normalization layer which performs normal-
ization over a local window and whose handle is returned in DLLayerLRN. Currently, for NormRegion only
’across_channels’ can be set, which results in a normalization across the channel dimension. More detailed, a
value xc located in a channel with index c is normalized with a scale factor depending on a local window,

 −Beta
min(N −1,c+n/2)
Alpha X
LRN (xc ) = xc · K + x2c0  ,
n
c0 =max(0,c−n/2)

where n is the size of the local window given by LocalSize, N is the total number of channels, Alpha is the
scaling parameter (used as a normalization constant), Beta is the exponent used as a contrast constant, and K is a
constant summand, which is used to avoid any singularities.
The parameter DLLayerInput determines the feeding input layer and expects the layer handle as value.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’

Certain parameters of layers created using this operator create_dl_layer_lrn can be set and
retrieved using further operators. The following tables give an overview of which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note that the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.

Layer Parameters set get


’alpha’ (Alpha) x
’beta’ (Beta) x
’input_layer’ (DLLayerInput) x
’k’ (K) x
’local_size’ (LocalSize) x
’name’ (LayerName) x x
’norm_region’ (NormRegion) x
’output_layer’ (DLLayerLRN) x
’shape’ x
’type’ x

Generic Layer Parameters set get


’is_inference_output’ x x
’num_trainable_params’ x


Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. LocalSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Size of the local window.
Default: 5
. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Scaling factor in the LRN formula.
Default: 0.0001
. Beta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Exponent in the LRN formula.
Default: 0.75
. K (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Constant summand in the LRN formula.
Default: 1.0
. NormRegion (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Normalization dimension.
Default: ’across_channels’
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerLRN (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
LRN layer.
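Example

A minimal sketch of a typical call; the input shape (32 x 32 with 8 channels) is chosen for illustration only and all layer parameters are the documented defaults.

* Apply local response normalization across the channels of an 8-channel input.
create_dl_layer_input ('input', [32, 32, 8], [], [], DLLayerInput)
create_dl_layer_lrn (DLLayerInput, 'lrn', 5, 0.0001, 0.75, 1.0, \
                     'across_channels', [], [], DLLayerLRN)
* The normalization does not change the shape of the data.
get_dl_layer_param (DLLayerLRN, 'shape', ShapeLRN)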
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Module
Deep Learning Professional

create_dl_layer_matmul ( : : DLLayerA, DLLayerB, LayerName,


GenParamName, GenParamValue : DLLayerMatMul )

Create a MatMul layer.


The operator create_dl_layer_matmul creates a MatMul layer whose handle is returned in
DLLayerMatMul.
A MatMul layer multiplies the 2D matrices, given in the latter two dimensions (H, W) of input DLLayerA, with
the corresponding 2D matrices of input DLLayerB, also given in the latter two dimensions (H, W). The output in
DLLayerMatMul is hence given by C = A · B.
The MatMul layer supports broadcasting for the first input DLLayerA. That means, if the batch size or the number
of channels in DLLayerA equals one then the first batch item or channel of DLLayerA is multiplied with all batch
items or channels of DLLayerB, respectively.
To make the multiplication work, the width of DLLayerA must be equal to the height of DLLayerB.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:


’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
’num_trainable_params’: Number of trainable parameters (weights and biases) of the layer.
’transpose_a’: Matrices of input DLLayerA are transposed: C = A^T · B.
Default: ’false’
’transpose_b’: Matrices of input DLLayerB are transposed: C = A · B^T.
Default: ’false’

Certain parameters of layers created using this operator create_dl_layer_matmul can be set and re-
trieved using further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.

Layer Parameters set get


’input_layer’ x
’name’ (LayerName) x x
’output_layer’ (DLLayerMatMul) x
’shape’ x
’transpose_a’ x
’transpose_b’ x
’type’ x

Generic Layer Parameters set get


’is_inference_output’ x x
’num_trainable_params’ x

Parameters
. DLLayerA (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Input layer A.
. DLLayerB (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Input layer B.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’, ’num_trainable_params’, ’transpose_a’,
’transpose_b’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerMatMul (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
MatMul layer.
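Example

A minimal sketch, assuming constant-valued input layers whose shapes are chosen only to illustrate the constraint that the width of DLLayerA must equal the height of DLLayerB.

* A contains 2x4 matrices (H = 2, W = 4), B contains 4x3 matrices (H = 4, W = 3).
create_dl_layer_input ('input_a', [4, 2, 1], ['input_type', 'const_val'], \
                       ['constant', 1.0], DLLayerA)
create_dl_layer_input ('input_b', [3, 4, 1], ['input_type', 'const_val'], \
                       ['constant', 2.0], DLLayerB)
create_dl_layer_matmul (DLLayerA, DLLayerB, 'matmul', [], [], DLLayerMatMul)
* The result C = A * B consists of 2x3 matrices (H = 2, W = 3).
get_dl_layer_param (DLLayerMatMul, 'shape', ShapeMatMul)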
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.


Module
Deep Learning Professional

create_dl_layer_permutation ( : : DLLayerInput, LayerName,


Permutation, GenParamName, GenParamValue : DLLayerPermutation )

Create a permutation layer.


The operator create_dl_layer_permutation creates a permutation layer whose handle is returned in
DLLayerPermutation.
The parameter DLLayerInput determines the feeding input layer and expects the layer handle as value.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The parameter Permutation determines the new order of the axes of DLLayerInput, to which the input axes
should be permuted.
Permutation has the form [index width, index height, index depth, index batch], where the indices correspond to the dimensions of the input. For example, [0, 1, 3, 2] swaps the depth and the batch axes. Each index must be unique and taken from the set {0, 1, 2, 3}.
On a CPU device, for some values of Permutation the internal code cannot be optimized, which can lead to increased runtime. In this case, the layer parameter ’fall_back_to_baseline’ is set to ’true’.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’

Certain parameters of layers created using this operator create_dl_layer_permutation


can be set and retrieved using further operators. The following tables give an overview, which
parameters can be set using set_dl_model_layer_param and which ones can be re-
trieved using get_dl_model_layer_param or get_dl_layer_param. Note, the operators
set_dl_model_layer_param and get_dl_model_layer_param require a model created by
create_dl_model.

Layer Parameters set get


’fall_back_to_baseline’ x
’input_layer’ (DLLayerInput) x
’name’ (LayerName) x x
’permutation’ (Permutation) x
’shape’ x
’type’ x

Generic Layer Parameters set get


’is_inference_output’ x x
’num_trainable_params’ x

Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. Permutation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer
Order of the permuted axes.
Default: [0,1,2,3]


. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string


Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerPermutation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Permutation layer.
Example

* Swap the batch and depth axes with a permutation layer.


create_dl_layer_input ('input_a', [1, 1, 4], ['input_type', 'const_val'], \
['constant', 1.0], DLLayerInputA)
create_dl_layer_input ('input_b', [1, 1, 4], ['input_type', 'const_val'], \
['constant', 2.0], DLLayerInputB)
create_dl_layer_concat ([DLLayerInputA, DLLayerInputB], 'concat', 'batch', \
[], [], DLLayerConcat)
create_dl_layer_permutation (DLLayerConcat, 'permute', [0,1,3,2], \
[], [], DLLayerPermute)
create_dl_layer_depth_max (DLLayerPermute, 'depth_max', 'value', \
[], [], _, DLLayerDepthMaxValue)
create_dl_model (DLLayerDepthMaxValue, DLModel)
* The expected output values in DLResultBatch.depth_max are [2.0,2.0,2.0,2.0]
query_available_dl_devices (['runtime'], ['cpu'], DLDeviceHandles)
set_dl_model_param (DLModel, 'device', DLDeviceHandles[0])
apply_dl_model (DLModel, dict{}, [], DLResultBatch)

Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
create_dl_layer_input, create_dl_layer_concat, create_dl_layer_reshape
Possible Successors
create_dl_layer_convolution, create_dl_layer_dense, create_dl_layer_reshape
See also
create_dl_layer_reshape
Module
Deep Learning Professional

create_dl_layer_pooling ( : : DLLayerInput, LayerName,


KernelSize, Stride, Padding, Mode, GenParamName,
GenParamValue : DLLayerPooling )

Create a pooling layer.


The operator create_dl_layer_pooling creates a pooling layer whose handle is returned in
DLLayerPooling.
The parameter DLLayerInput determines the feeding input layer and expects the layer handle as value.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.


The parameter KernelSize specifies the filter kernel in the dimensions width and height.
The parameter Stride specifies how the filter is shifted.
The values for KernelSize and Stride can be set as

• a single value which is used for both dimensions


• a tuple [width, height] and [column, row], respectively.

The parameter Padding determines the padding, i.e., how many pixels with value 0 are appended at the border of the processed input image. Supported values are:

• ’half_kernel_size’: The number of appended pixels depends on the specified KernelSize. More precisely, it is calculated as ⌊KernelSize/2⌋, where for the padding on the left / right border the value of KernelSize in dimension width is used and for the padding on the upper / lower border the value of KernelSize in height.
• ’implicit’: No pixels are appended on the left or on the top of the input image. The number of pixels appended
on the right or lower border of the input image is Stride − (input_dim − KernelSize)%Stride, or
zero if the kernel size is a divisor of the input dimension. input_dim stands for the input width or height.
• ’none’: No pixels are appended.
• Number of pixels: Specify the number of pixels appended on each border. To do so, the following tuple
lengths are supported:
– Single number: Padding in all four directions left/right/top/bottom.
– Two numbers: Padding in left/right and top/bottom: [l/r, t/b].
– Four numbers: Padding on left, right, top, bottom side: [l,r,t,b].
Restriction: ’runtime’ ’gpu’ does not support asymmetric padding, i.e., the padding values for the left
and right side must be equal, as well as the padding values for the top and bottom side.
Restriction: The integer padding values must be smaller than the value set for KernelSize in the corre-
sponding dimension.

The output dimensions of the pooling layer are given by

 
input_dim + padding_begin + padding_end − KernelSize
output_dim = +1
Stride

Thereby we use the following values: output_dim: output width, input_dim: input width, padding_begin:
number of pixels added to the left/top of the input image, and padding_end: number of pixels added to the
right/bottom of the input image.
The parameter Mode specifies the mode of the pooling operation. Supported modes are:

’average’: The resulting pixel value is the average of all pixel values in the filter.
’maximum’: The resulting pixel value is the maximum of all pixel values in the filter.
’global_average’: Same as mode ’average’, but the desired output dimensions are defined via the parameter KernelSize, without requiring knowledge of the spatial dimensions of the input. E.g., if the average over all pixel values of the input shall be returned, set KernelSize to 1; the output width and height are then equal to 1. The internally used kernel size and stride are calculated as follows:
• If KernelSize is a divisor of the input dimensions: The internally used kernel size and stride are both
set to the value input_dim/KernelSize.
• If KernelSize is not a divisor of the input dimension: The calculation of the internally used kernel
size and stride depend on the generic parameter ’global_pooling_mode’:
’overlapping’: The internally used stride is set to ⌊input_dim/KernelSize⌋. The internally used kernel size is then computed as input_dim − (KernelSize − 1) · stride. This leads to overlapping kernels, but the whole input image is taken into account for the computation of the output.
’non_overlapping’: The internally used kernel size and stride are both set to the value ⌊input_dim/KernelSize⌋. This leads to non-overlapping pooling kernels, but parts of the input image at the right or bottom border might not be considered when computing the output. In this mode, due to rounding, the output size is not always equal to the size given by KernelSize.


’adaptive’: In this mode, for each pixel (k, l) of the output, the size of the corresponding pooling area within the input is computed adaptively, where k is the row index and l the column index of the output. The row indices of the pooling area for pixels of the k-th output row are given by [⌊k · input_dim/KernelSize⌋, ⌈(k + 1) · input_dim/KernelSize⌉), where in this case the height of the KernelSize is used. The column coordinates are computed analogously. This means that neighboring pooling areas can have different sizes, which can lead to a less efficient implementation. However, the pooling areas overlap by at most one pixel, which is generally less overlap than for ’global_pooling_mode’ ’overlapping’. The whole input image is taken into account for the computation of the output. For this mode, the parameter Padding must be set to ’none’.
For this mode the parameter Stride is ignored and calculated internally as described above.
’global_maximum’: Same as mode ’global_average’, but the maximum is calculated instead of the average.

For more information about the pooling layer see the “Solution Guide on Classification”.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’global_pooling_mode’: Mode for calculation of the internally used kernel size and stride in case of global pooling
(Mode ’global_average’ or ’global_maximum’). See description above. In case of a non-global pooling the
parameter is set to the value ’undefined’.
Default: ’overlapping’
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’

Certain parameters of layers created using this operator create_dl_layer_pooling can be set and
retrieved using further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.

Layer Parameters set get


’global’ x
’global_pooling_mode’ x
’input_layer’ (DLLayerInput) x
’kernel_size’ (KernelSize) x
’name’ (LayerName) x x
’output_layer’ (DLLayerPooling) x
’padding’ (Padding) x
’padding_type’ (Padding) x
’pooling_mode’ (Mode) x
’shape’ x
’stride’ (Stride) x
’type’ x

Generic Layer Parameters set get


’is_inference_output’ x x
’num_trainable_params’ x


Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. KernelSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer
Width and height of the filter kernels.
Default: [2,2]
. Stride (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer
Bi-dimensional amount of filter shift.
Default: [2,2]
. Padding (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; string / integer
Padding type or specific padding size.
Default: ’none’
Suggested values: Padding ∈ {’none’, ’half_kernel_size’, ’implicit’}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; string
Mode of pooling operation.
Default: ’maximum’
List of values: Mode ∈ {’maximum’, ’average’, ’global_maximum’, ’global_average’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’global_pooling_mode’, ’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’adaptive’, ’non_overlapping’, ’overlapping’, ’true’, ’false’, 1.0,
0.9, 0.0}
. DLLayerPooling (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Pooling layer.
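Example

A minimal sketch, assuming an illustrative input of size 64 x 32 with 3 channels; the comment on the output size follows from the output dimension formula given above.

create_dl_layer_input ('input', [64, 32, 3], [], [], DLLayerInput)
* 2x2 maximum pooling with stride 2 and no padding:
* output_dim = floor((input_dim + 0 + 0 - 2) / 2) + 1, i.e., 32 x 16.
create_dl_layer_pooling (DLLayerInput, 'pool', [2,2], [2,2], 'none', 'maximum', \
                         [], [], DLLayerPooling)
get_dl_layer_param (DLLayerPooling, 'shape', ShapePooling)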
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Module
Deep Learning Professional

create_dl_layer_reduce ( : : DLLayerInput, LayerName, Operation,


Axes, GenParamName, GenParamValue : DLLayerReduce )

Create a reduce layer.


The operator create_dl_layer_reduce creates a reduce layer whose handle is returned in
DLLayerReduce.
A reduce layer applies a given operation to the input data tensor to reduce it along one or multiple axes to a single
value. Hence, the output tensor has the same shape as the input tensor, but at the axes given by Axes the dimension
equals one.
The parameter DLLayerInput determines the feeding input layer. This layer expects a single layer as input.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The parameter Operation specifies the operation that is applied. The operation is applied to the values along the
axes given by Axes of the input tensor and the result is written to the corresponding position in the output tensor.
The supported values for Operation are:


• ’norm_l2’: Computes the L2 norm of the input values.


• ’sum’: Computes the sum of the input values.

The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
’div_eps’: Small scalar value that is used to stabilize the training. I.e., in case of a division, the value is added to
the denominator to prevent a division by zero.
Default: 1e-10

Certain parameters of layers created using this operator create_dl_layer_reduce can be set and re-
trieved using further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.

Layer Parameters set get


’axes’ (Axes) x
’input_layer’ (DLLayerInput) x
’name’ (LayerName) x x
’operation’ (Operation) x
’output_layer’ (DLLayerReduce) x
’shape’ x
’type’ x

Generic Layer Parameters set get


’is_inference_output’ x x
’num_trainable_params’ x
’div_eps’ x

Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding input layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. Operation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Reduce operation.
Default: ’norm_l2’
List of values: Operation ∈ {’norm_l2’, ’sum’}
. Axes (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Axes to which the reduce operation is applied.
Default: [2,3]
List of values: Axes ∈ {1, 2, 3, ’width’, ’height’, ’depth’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’div_eps’, ’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {1e-10, ’true’, ’false’}
. DLLayerReduce (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Reduce layer.


Example

* Minimal example for reduce-layer.


create_dl_layer_input ('input', [64, 32, 10], [], [], DLLayerInput)
create_dl_layer_reduce (DLLayerInput, 'reduce_width', 'sum', 'width', [], [], \
DLLayerReduceWidth)
create_dl_layer_reduce (DLLayerReduceWidth, 'reduce_height_depth', 'norm_l2', [1,2], [], \
                        [], DLLayerReduceHeightDepth)
* Create a model and change the batch-size.
create_dl_model (DLLayerReduceHeightDepth, DLModel)
set_dl_model_param (DLModel, 'batch_size', 2)
get_dl_model_layer_param (DLModel, 'reduce_height_depth', 'shape', ShapeReduceHeightWidth)

Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
create_dl_layer_input
Possible Successors
create_dl_model
Module
Deep Learning Professional

create_dl_layer_reshape ( : : DLLayerInput, LayerName, Shape,


GenParamName, GenParamValue : DLLayerReshape )

Create a reshape layer.


The operator create_dl_layer_reshape creates a reshape layer whose handle is returned in
DLLayerReshape.
The parameter DLLayerInput determines the feeding input layer and expects the layer handle as value.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The parameter Shape determines the output shape, into which the input data is converted.
The value of Shape has to be given in the form [width, height, depth, batch_size], where the
fourth value for the batch size is optional (see below). The overall size of the data has to remain constant,
i.e., width_out * height_out * depth_out * batch_size_out = width_in * height_in *
depth_in * batch_size_in.
The following options are available for setting the values of Shape:

• Setting a value for each of the four dimensions,


• One or several values are set to 0 in order to keep the value of the input dimension,
• By setting a maximum of one value to -1, this value will be determined automatically. It will be calculated
in a way that the overall size remains constant. Note that this is only possible if the computed value is an
integer.

For a model that was created using create_dl_model, the model’s batch size should always remain settable with set_dl_model_param. Hence, either the output batch size of the reshape layer equals the batch size of the model (batch size in Shape set to 0), or at least one reshape dimension should be calculated automatically (one value in Shape set to -1). Consequently, if the batch size is specified and not set to 0, at least one dimension of Shape must be set to -1. If the batch size is not specified, it is set to 0, which leads to an output batch size equal to the input one.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’

Certain parameters of layers created using this operator create_dl_layer_reshape can be set and
retrieved using further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.

Layer Parameters set get


’input_layer’ (DLLayerInput) x
’name’ (LayerName) x x
’output_depth’ (Shape) x
’output_height’ (Shape) x
’output_layer’ (Shape) x
’output_width’ (Shape) x
’shape’ x
’type’ x

Generic Layer Parameters set get


’is_inference_output’ x x
’num_trainable_params’ x

Parameters

. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle


Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. Shape (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer
Shape of the output graph layer data.
Default: [224,224,3]
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerReshape (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Reshape layer.
Example


* Minimal example for reshape-layer.


create_dl_layer_input ('input', [64, 32, 10], [], [], DLLayerInput)
create_dl_layer_reshape (DLLayerInput, 'reshape_wh', [32, 64, 0], [], [], \
DLLayerReshapeWH)
create_dl_layer_reshape (DLLayerInput, 'reshape_bs', [64, 32, 1, -1], [], \
[], DLLayerReshapeBS)
* DLLayerReshapeBS has batch size 10 and depth 1.
get_dl_layer_param (DLLayerReshapeBS, 'shape', ShapeReshapeBS)
* Create a model and change the batch-size.
create_dl_model (DLLayerReshapeBS, DLModel)
set_dl_model_param (DLModel, 'batch_size', 2)
* DLLayerReshapeBS has batch size 20 now.
get_dl_model_layer_param (DLModel, 'reshape_bs', 'shape', ShapeReshapeBS)

Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
create_dl_layer_input, create_dl_layer_concat
Possible Successors
create_dl_layer_convolution, create_dl_layer_dense
Module
Deep Learning Professional

create_dl_layer_softmax ( : : DLLayerInput, LayerName,


GenParamName, GenParamValue : DLLayerSoftMax )

Create a softmax layer.


The operator create_dl_layer_softmax creates a softmax layer whose handle is returned in
DLLayerSoftMax.
The parameter DLLayerInput determines the feeding input layer and expects the layer handle as value.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The softmax layer applies the softmax function, which is defined for each input x_i as follows:

Softmax(x_i) = exp(x_i) / ( sum_{j=0}^{N-1} exp(x_j) )

where N is the number of inputs. During training, the result of the softmax function is transformed by a logarithm
function, such that the values are suitable as input to e.g., a cross entropy loss layer. This behavior can be changed
by setting the generic parameter ’output_mode’, see below.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’output_mode’: This parameter determines if and in which case the output is transformed by a logarithm function:
• ’default’: During inference, the result of the softmax function is returned as output while during training,
the softmax is further transformed by a logarithm function.
• ’no_log_training’: During training the result of the softmax function is not transformed by a logarithm
function.


• ’log_inference’: The logarithm of the softmax is calculated during inference in the same way as during
training.
Default: ’default’.
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’

Certain parameters of layers created using this operator create_dl_layer_softmax can be set and
retrieved using further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.

Layer Parameters set get


’input_layer’ (DLLayerInput) x
’name’ (LayerName) x x
’output_layer’ (DLLayerSoftMax) x
’shape’ x
’type’ x

Generic Layer Parameters set get


’is_inference_output’ x x
’num_trainable_params’ x
’output_mode’ x

Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’output_mode’, ’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’default’, ’no_log_training’, ’log_inference’, ’true’, ’false’}
. DLLayerSoftMax (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Softmax layer.
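Example

A minimal sketch, assuming an input layer holding 5 class scores per batch item (shape 1 x 1 x 5); the generic parameter shown keeps the plain softmax output also during training.

create_dl_layer_input ('scores', [1, 1, 5], [], [], DLLayerScores)
create_dl_layer_softmax (DLLayerScores, 'softmax', ['output_mode'], \
                         ['no_log_training'], DLLayerSoftMax)
get_dl_layer_param (DLLayerSoftMax, 'shape', ShapeSoftMax)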
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Module
Deep Learning Professional

create_dl_layer_transposed_convolution ( : : DLLayerInput,
LayerName, KernelSize, Stride, KernelDepth, Groups, Padding,
GenParamName, GenParamValue : DLLayerTransposedConvolution )

Create a transposed convolution layer.


The operator create_dl_layer_transposed_convolution creates a transposed convolution layer


whose handle is returned in DLLayerTransposedConvolution.
The parameter DLLayerInput determines the feeding input layer and expects the layer handle as value.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The parameter KernelSize specifies the filter kernel in the dimensions width and height. So far, only square kernels are supported.
Restriction: This value must be a tuple of length 1.
The parameter Stride determines how the filter is shifted in row and column direction.
Restriction: This value must be a tuple of length 1.
The parameter KernelDepth defines the depth of the output feature maps.
Restriction: This value must be a tuple of length 1.
The parameter Groups determines the number of filter groups. So far, only a single filter group is supported.
Restriction: This value must be a tuple of length 1.
The parameter Padding effectively appends KernelSize−1−Padding pixels with value 0 to each border of
the input. This is set so that a convolutional layer and a transposed convolution layer with the same KernelSize,
Stride and Padding values are inverses of each other regarding their input and output shapes. Supported
Padding values are:

• ’half_kernel_size’: The integer value of Padding in the formula above depends on the specified KernelSize. More precisely, it is calculated as ⌊KernelSize/2⌋.
• ’none’: The value of Padding in the formula above is 0.
• Number of pixels: Specify the integer value of Padding in the formula above for each border. To do so, the
following tuple lengths are supported:
– Single number: Padding value for all four directions left/right/top/bottom.
– Two numbers: Padding value for left/right and top/bottom: [l/r, t/b].
– Four numbers: Padding value for left, right, top, bottom side: [l,r,t,b].
Restriction: ’runtime’ ’gpu’ does not support asymmetric padding, i.e., the padding values for the left
and right side must be equal, as well as the padding values for the top and bottom side.
Restriction: The integer padding values must be smaller than the value set for KernelSize.

The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’bias_term’: Determines, whether the layer has bias terms.


Default: ’true’
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
’learning_rate_multiplier’: Learning rate multiplier for this layer that is used during training. If ’learn-
ing_rate_multiplier’ is set to 0.0, the layer is skipped during training.
Default: 1.0
’output_padding’: Can be used to resolve ambiguities in the output shape for KernelSize and Stride larger
than 1. As mentioned above in the description of the parameter Padding, with respect to the input and
output shapes a transposed convolution layer can be seen as the inverse of a convolution layer if they have
the same settings for KernelSize, Stride, and Padding (and dilation). However, a convolution layer
can map several shapes to the same output spatial dimensions. E.g., for KernelSize 3, Padding 1, and
Stride 2, both inputs with spatial dimensions (H, W) = ’(4, 4)’ and (H, W) = ’(3, 3)’ are mapped to an
output shape with (H, W) = ’(2, 2)’. To get back to ’(4, 4)’ using a transposed convolution (with the same
settings for KernelSize, Stride, and Padding) ’output_padding’ must be set to 1. ’output_padding’
must not be larger than the bottom and right Padding and can only be set larger than 0 if Stride is larger
than 1.
Default: 0


’weight_filler’: Defines the mode how the weights are initialized. See create_dl_layer_convolution
for a detailed explanation of this parameter and its values.
List of values: ’xavier’, ’msra’, ’const’
Default: ’xavier’
’weight_filler_const_val’: See create_dl_layer_convolution for a detailed explanation of this parame-
ter and its values.
Default: 0.5
’weight_filler_variance_norm’: Value range for ’weight_filler’. See create_dl_layer_convolution for
a detailed explanation of this parameter and its values.
List of values: ’norm_average’, ’norm_in’, ’norm_out’, constant value (in combination with ’weight_filler’
= ’msra’)
Default: ’norm_in’

Certain parameters of layers created using this operator create_dl_layer_transposed_convolution


can be set and retrieved using further operators. The following tables give an overview, which
parameters can be set using set_dl_model_layer_param and which ones can be re-
trieved using get_dl_model_layer_param or get_dl_layer_param. Note, the operators
set_dl_model_layer_param and get_dl_model_layer_param require a model created by
create_dl_model.

Layer Parameters set get


’groups’ (Groups) x
’input_depth’ x
’input_layer’ (DLLayerInput) x
’kernel_depth’ (KernelDepth) x
’kernel_size’ (KernelSize) x
’name’ (LayerName) x x
’output_layer’ (DLLayerTransposedConvolution) x
’padding_type’ (Padding) x
’shape’ x
’stride’ (Stride) x
’type’ x

Generic Layer Parameters set get


’bias_term’ x
’is_inference_output’ x x
’learning_rate_multiplier’ x x
’num_trainable_params’ x
’output_padding’ x
’weight_filler’ x x
’weight_filler_const_val’ x x
’weight_filler_variance_norm’ x x

Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. KernelSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Width and height of the filter kernels.
Default: 3
. Stride (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Amount of filter shift.
Default: 1


. KernelDepth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer


Depth of filter kernels.
Default: 64
. Groups (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Number of filter groups.
Default: 1
. Padding (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; string / integer
Type of the padding.
Default: ’none’
List of values: Padding ∈ {’none’, ’half_kernel_size’, [all], [width,height], [left,right,top,bottom]}
Suggested values: Padding ∈ {’none’, ’half_kernel_size’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’bias_term’, ’is_inference_output’, ’learning_rate_multiplier’,
’output_padding’, ’weight_filler’, ’weight_filler_const_val’, ’weight_filler_variance_norm’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’xavier’, ’msra’, ’const’, ’norm_in’, ’norm_out’, ’norm_average’,
’true’, ’false’}
. DLLayerTransposedConvolution (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Transposed convolutional layer.
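Example

A minimal sketch, assuming an illustrative 32 x 16 feature map with 64 channels. Since a transposed convolution with the same KernelSize, Stride, and Padding is the shape inverse of the corresponding convolution (see above), KernelSize 2, Stride 2, and Padding ’none’ double the width and height.

create_dl_layer_input ('features', [32, 16, 64], [], [], DLLayerInput)
create_dl_layer_transposed_convolution (DLLayerInput, 'upsample', 2, 2, 32, 1, \
                                        'none', [], [], \
                                        DLLayerTransposedConvolution)
* The resulting feature maps have size 64 x 32 and depth 32 (KernelDepth).
get_dl_layer_param (DLLayerTransposedConvolution, 'shape', ShapeUpsampled)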
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.

Module
Deep Learning Professional

create_dl_layer_zoom_factor ( : : DLLayerInput, LayerName,


ScaleWidth, ScaleHeight, Interpolation, AlignCorners,
GenParamName, GenParamValue : DLLayerZoom )

Create a zoom layer using size factors.


The operator create_dl_layer_zoom_factor creates a zoom layer using size factors and returns the layer
handle in DLLayerZoom.
The parameter DLLayerInput determines the feeding input layer and expects the layer handle as value.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The parameters ScaleWidth and ScaleHeight specify the ratio between the output and the corresponding
input dimension. Together they define the output size of the zoom layer DLLayerZoom.
The parameter Interpolation defines the interpolation mode. Currently only the mode ’bilinear’ is supported.
The parameter AlignCorners defines how coordinates are transformed from the output to the input image:

’true’: The transformation is applied in the HALCON Non-Standard Cartesian coordinate system (edge-centered,
with the origin in the upper left corner, see chapter Transformations / 2D Transformations). Using the x axis
as an example, this leads to:

x_input = x_output * (length_input - 1) / (length_output - 1)


’false’: The transformation is applied in the HALCON standard coordinate system (pixel centered, with the origin
in the center of the upper left pixel, see chapter Transformations / 2D Transformations). Using the x axis as
an example, this leads to:

x_input = (x_output + 0.5) * length_input / length_output - 0.5

The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’

Certain parameters of layers created using this operator create_dl_layer_zoom_factor


can be set and retrieved using further operators. The following tables give an overview, which
parameters can be set using set_dl_model_layer_param and which ones can be re-
trieved using get_dl_model_layer_param or get_dl_layer_param. Note, the operators
set_dl_model_layer_param and get_dl_model_layer_param require a model created by
create_dl_model.

Layer Parameters set get


’align_corners’ (AlignCorners) x x
’input_layer’ (DLLayerInput) x
’interpolation_mode’ (Interpolation) x
’name’ (LayerName) x x
’output_layer’ (DLLayerZoom) x
’scale_params’ (ScaleWidth and ScaleHeight) x
’shape’ x
’type’ x

Generic Layer Parameters set get


’is_inference_output’ x x
’num_trainable_params’ x

Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. ScaleWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Ratio output/input width of the layer.
Default: 2.0
. ScaleHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Ratio output/input height of the layer.
Default: 2.0
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Mode of interpolation.
Default: ’bilinear’
List of values: Interpolation ∈ {’bilinear’}
. AlignCorners (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of coordinate transformation between output/input images.
Default: ’false’
List of values: AlignCorners ∈ {’true’, ’false’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}


. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real


Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerZoom (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Zoom layer.
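Example

A minimal sketch, assuming an illustrative 100 x 50 input with 3 channels that is upscaled by a factor of 2 in both dimensions.

create_dl_layer_input ('image', [100, 50, 3], [], [], DLLayerInput)
create_dl_layer_zoom_factor (DLLayerInput, 'zoom', 2.0, 2.0, 'bilinear', \
                             'false', [], [], DLLayerZoom)
* The resulting shape is 200 x 100 with unchanged depth 3.
get_dl_layer_param (DLLayerZoom, 'shape', ShapeZoom)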
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Alternatives
create_dl_layer_zoom_size, create_dl_layer_zoom_to_layer_size
Module
Deep Learning Professional

create_dl_layer_zoom_size ( : : DLLayerInput, LayerName, Width,


Height, Interpolation, AlignCorners, GenParamName,
GenParamValue : DLLayerZoom )

Create a zoom layer using an absolute output size.


The operator create_dl_layer_zoom_size creates a zoom layer using an absolute output size and returns
the layer handle in DLLayerZoom.
The parameter DLLayerInput determines the feeding input layer and expects the layer handle as value.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The parameters Width and Height define the absolute output size of the zoom layer DLLayerZoom.
The parameter Interpolation defines the interpolation mode. Currently only the mode ’bilinear’ is supported.
The parameter AlignCorners defines how coordinates are transformed from the output to the input image:

’true’: The transformation is applied in the HALCON Non-Standard Cartesian coordinate system (edge-centered,
with the origin in the upper left corner, see chapter Transformations / 2D Transformations). Using the x axis
as an example, this leads to:

x_input = x_output * (length_input - 1) / (length_output - 1)

’false’: The transformation is applied in the HALCON standard coordinate system (pixel centered, with the origin
in the center of the upper left pixel, see chapter Transformations / 2D Transformations). Using the x axis as
an example, this leads to:

x_input = (x_output + 0.5) * length_input / length_output - 0.5

The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:

’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’

Certain parameters of layers created using this operator create_dl_layer_zoom_size can be set and
retrieved using further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.


Layer Parameters set get


’align_corners’ (AlignCorners) x x
’input_layer’ (DLLayerInput) x
’interpolation_mode’ (Interpolation) x
’name’ (LayerName) x x
’output_layer’ (DLLayerZoom) x
’scale_params’ (Width and Height) x
’shape’ x
’type’ x

Generic Layer Parameters set get


’is_inference_output’ x x
’num_trainable_params’ x

Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Absolute width of the output layer.
Default: 100
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Absolute height of the output layer.
Default: 100
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Mode of interpolation.
Default: ’bilinear’
List of values: Interpolation ∈ {’bilinear’}
. AlignCorners (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of coordinate transformation between output/input images.
Default: ’false’
List of values: AlignCorners ∈ {’true’, ’false’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerZoom (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Zoom layer.
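Example

A minimal sketch, assuming an illustrative 100 x 50 input with 3 channels that is resized to an absolute output size of 224 x 224.

create_dl_layer_input ('image', [100, 50, 3], [], [], DLLayerInput)
create_dl_layer_zoom_size (DLLayerInput, 'zoom_to_224', 224, 224, 'bilinear', \
                           'false', [], [], DLLayerZoom)
* The resulting shape is 224 x 224 with unchanged depth 3.
get_dl_layer_param (DLLayerZoom, 'shape', ShapeZoom)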
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Alternatives
create_dl_layer_zoom_factor, create_dl_layer_zoom_to_layer_size
Module
Deep Learning Professional


create_dl_layer_zoom_to_layer_size ( : : DLLayerInput,
DLLayerReference, LayerName, Interpolation, AlignCorners,
GenParamName, GenParamValue : DLLayerZoom )

Create a zoom layer using the output size of a reference layer.


The operator create_dl_layer_zoom_to_layer_size creates a zoom layer using the output size of a
reference layer and returns the layer handle in DLLayerZoom.
The parameter DLLayerInput determines the feeding input layer and expects the layer handle as value.
The parameter DLLayerReference is used to define the output size of the zoom layer DLLayerZoom: the
size is adapted to the output size of DLLayerReference.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The parameter Interpolation defines the interpolation mode. Currently only the mode ’bilinear’ is supported.
The parameter AlignCorners defines how coordinates are transformed from the output to the input image:
’true’: The transformation is applied in the HALCON Non-Standard Cartesian coordinate system (edge-centered,
with the origin in the upper left corner, see chapter Transformations / 2D Transformations). Using the x axis
as an example, this leads to:

x_input = x_output * (length_input - 1) / (length_output - 1)


’false’: The transformation is applied in the HALCON standard coordinate system (pixel centered, with the origin
in the center of the upper left pixel, see chapter Transformations / 2D Transformations). Using the x axis as
an example, this leads to:

x_input = (x_output + 0.5) * length_input / length_output - 0.5


The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
Certain parameters of layers created using this operator create_dl_layer_zoom_to_layer_size
can be set and retrieved using further operators. The following tables give an overview, which
parameters can be set using set_dl_model_layer_param and which ones can be re-
trieved using get_dl_model_layer_param or get_dl_layer_param. Note, the operators
set_dl_model_layer_param and get_dl_model_layer_param require a model created by
create_dl_model.

Layer Parameters set get


’align_corners’ (AlignCorners) x x
’input_layer’ (DLLayerInput) x
’interpolation_mode’ (Interpolation) x
’name’ (LayerName) x x
’output_layer’ (DLLayerZoom) x
’scale_params’ (DLLayerReference) x
’shape’ x
’type’ x

Generic Layer Parameters set get


’is_inference_output’ x x
’num_trainable_params’ x


Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. DLLayerReference (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Reference layer to define the output size.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Mode of interpolation.
Default: ’bilinear’
List of values: Interpolation ∈ {’bilinear’}
. AlignCorners (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of coordinate transformation between output/input images.
Default: ’false’
List of values: AlignCorners ∈ {’true’, ’false’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerZoom (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Zoom layer.
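The following lines show a minimal sketch (layer names and sizes are chosen freely for illustration and are not
part of a shipped example): a feature map that was downsampled by a strided convolution is zoomed back to the
spatial size of the input layer.

* A 64 x 64 single-channel input layer (size chosen only for illustration).
create_dl_layer_input ('image', [64,64,1], [], [], DLLayerImage)
* A convolution with stride 2 reduces the spatial resolution.
create_dl_layer_convolution (DLLayerImage, 'conv_down', 3, 1, 2, 8, 1, 'none',\
                             'none', [], [], DLLayerDown)
* Zoom the downsampled feature map back to the spatial size of the input layer.
create_dl_layer_zoom_to_layer_size (DLLayerDown, DLLayerImage, 'zoom_up',\
                                    'bilinear', 'false', [], [], DLLayerZoom)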
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Alternatives
create_dl_layer_zoom_size, create_dl_layer_zoom_factor
Module
Deep Learning Professional

create_dl_model ( : : OutputLayers : DLModelHandle )

Create a deep learning model.


The operator create_dl_model creates a deep learning model from a graph and returns its handle in
DLModelHandle.
A deep learning model in HALCON mainly consists of a directed acyclic graph that defines the network’s
architecture. Further components of a deep learning model in HALCON are parameters such as ’class_names’
and ’class_ids’, among many others, or hyperparameters that are needed to train a model, such as the
’learning_rate’. While parameters and hyperparameters can be set after creation of the model using
set_dl_model_param, the model itself can only be created using create_dl_model if its network
architecture is given in the form of a graph.
To build a graph that defines the model’s network architecture, the network’s layers need to be put together. In
general, a graph starts with an input layer. A subsequent layer uses the input layer as its feeding layer, and the
new layer itself might be used as a feeding layer for the next layer, and so on. This is repeated until the graph’s
output layers (e.g., softmax or loss layers) are appended to the graph. To create a layer, use its specific creation
operator, e.g., an input layer is created using create_dl_layer_input, a convolution layer is created using
create_dl_layer_convolution, and so on.


When the graph is defined, a model can be created using create_dl_model by passing the graph’s output
layer handles in OutputLayers. Note that the output layer handles store all other layers that directly or indirectly
serve as their feeding input layers during their creation. This means that the output layer handles carry the whole
network architecture necessary for the creation of the model using create_dl_model.
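As a minimal sketch of this workflow (layer names and sizes are chosen freely for illustration; compare the more
complete example of get_dl_model_layer_weights), a small graph can be built and turned into a model as
follows:

* Build a tiny graph: input -> convolution -> activation -> dense -> softmax.
create_dl_layer_input ('input', [32,32,1], [], [], DLLayerInput)
create_dl_layer_convolution (DLLayerInput, 'conv', 3, 1, 1, 4, 1, 'none',\
                             'none', [], [], DLLayerConv)
create_dl_layer_activation (DLLayerConv, 'relu', 'relu', [], [], DLLayerRelu)
create_dl_layer_dense (DLLayerRelu, 'dense', 2, [], [], DLLayerDense)
create_dl_layer_softmax (DLLayerDense, 'softmax', [], [], DLLayerSoftmax)
* The softmax output layer handle carries the whole graph.
create_dl_model (DLLayerSoftmax, DLModelHandle)
* Optionally declare the task the architecture was designed for.
set_dl_model_param (DLModelHandle, 'type', 'classification')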
The type of the created model, and hence the task the model is designed for (classification, object detection,
segmentation), is determined by the network’s architecture. However, if the network’s architecture allows it, the
type of the model, ’type’, can be set using set_dl_model_param. A specified model type allows a more
user-friendly usage in the HALCON deep learning workflow. Supported types are:

’generic’: This is the default model type. The task the model’s neural network can solve is defined by its
architecture. When apply_dl_model is applied for inference, the operator returns the activations of the
output layers. To train the model using train_dl_model_batch, the underlying graph requires loss
layers.
’classification’: The model is specified for classification and all layers required for training the model are adapted
to the model. When apply_dl_model is applied for inference, the output is adapted according to the type,
see apply_dl_model for more details. See Deep Learning / Classification for further information.
In addition, the operator gen_dl_model_heatmap can be used to display the model’s heatmap.
’detection’: The model is specified for object detection and instance segmentation and all layers and anchors
required for training the model are adapted to the model. When apply_dl_model is applied for inference,
the output is adapted according to the type, see apply_dl_model for more details. See Deep Learning /
Object Detection and Instance Segmentation for further information.
’multi_label_classification’: The model is specified for multi-label classification and all layers required for train-
ing the model are adapted to the model. When apply_dl_model is applied for inference, the output is
adapted according to the type, see apply_dl_model for more details. See Deep Learning / Multi-Label
Classification for further information.
’segmentation’: The model is specified for semantic segmentation or edge extraction respectively and all layers
required for training the model are adapted to the model. When apply_dl_model is applied for inference,
the output is adapted according to the type, see apply_dl_model for more details. See Deep Learning /
Semantic Segmentation and Edge Extraction for further information.

Furthermore, many deep learning procedures provide more functionality for the model if its type is set. For
example, dev_display_dl_data can be used to display the inference results more conveniently.
Note that setting a model type requires that the graph fulfills certain structural conditions. We recommend
following the architecture of the delivered neural networks if the model type is to be set to one of these types.
Parameters
. OutputLayers (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer(-array) ; handle
Output layers of the graph.
. DLModelHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .dl_model ; handle
Handle of the deep learning model.
Result
If the parameters are valid, the operator create_dl_model returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
create_dl_layer_softmax, create_dl_layer_loss_cross_entropy,
create_dl_layer_loss_focal, create_dl_layer_loss_huber
Possible Successors
set_dl_model_param


Module
Deep Learning Professional

get_dl_layer_param ( : : DLLayer, GenParamName : GenParamValue )

Return the parameters of a deep learning layer.


The operator get_dl_layer_param returns the parameter GenParamName of the deep learning layer
DLLayer in GenParamValue.
Depending on the type of the layer, different parameter names are valid. Which generic and layer-specific
parameters can be queried is described in the specific references of the operators used for layer creation
(create_dl_layer_*).
Parameters
. DLLayer (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Layer.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Parameter to query.
Default: ’shape’
List of values: GenParamName ∈ {’input_layer’, ’name’, ’shape’, ’type’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . attribute.value(-array) ; real / integer / string / handle
Value of the queried parameter.
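A brief sketch (layer names and sizes are chosen freely for illustration): the shape and type of a freshly created
layer handle are queried.

create_dl_layer_input ('input', [32,32,3], [], [], DLLayerInput)
create_dl_layer_convolution (DLLayerInput, 'conv', 3, 1, 1, 8, 1, 'none',\
                             'none', [], [], DLLayerConv)
get_dl_layer_param (DLLayerConv, 'shape', Shape)
get_dl_layer_param (DLLayerConv, 'type', LayerType)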
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Module
Deep Learning Professional

get_dl_model_layer ( : : DLModelHandle, LayerNames : DLLayers )

Create a deep copy of the layers and all of their graph ancestors in a given deep learning model.
The operator get_dl_model_layer creates a deep copy of every layer named in LayerNames and all their
graph ancestors in the deep learning model DLModelHandle. You can retrieve the unique layer names using
get_dl_model_param with its option ’summary’.
You might use the output layers returned in DLLayers as inputs to the create_dl_layer_* and
create_dl_model operators in order to create novel model architectures based on existing models.
If you want to get multiple layers of a single model, these layers have to be specified as a LayerNames tuple in
a single call to get_dl_model_layer. Doing so avoids multiple deep copies of graph ancestors that are
potentially shared by the layers.
Example:

get_dl_model_layer (DLModelHandleOrig, ['layer_name_3','layer_name_6'],\
                    DLLayersOutput)
create_dl_model ([DLLayersOutput], DLModelHandle)

Please note that the output layers are copies: they contain the same weights and settings as the given input
model, but they are independent copies. You cannot alter the existing model by changing the output layers.


Parameters
. DLModelHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_model ; handle
Deep learning model.
. LayerNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Names of the layers to be copied.
. DLLayers (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer(-array) ; handle
Copies of layers and all of their ancestors.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
read_dl_model
Possible Successors
create_dl_model, create_dl_layer_activation,
create_dl_layer_batch_normalization, create_dl_layer_class_id_conversion,
create_dl_layer_class_id_conversion, create_dl_layer_concat,
create_dl_layer_convolution, create_dl_layer_dense, create_dl_layer_depth_max,
create_dl_layer_dropout, create_dl_layer_elementwise,
create_dl_layer_loss_cross_entropy, create_dl_layer_loss_ctc,
create_dl_layer_loss_distance, create_dl_layer_loss_focal,
create_dl_layer_loss_huber, create_dl_layer_lrn, create_dl_layer_pooling,
create_dl_layer_reduce, create_dl_layer_reshape, create_dl_layer_softmax,
create_dl_layer_transposed_convolution, create_dl_layer_zoom_factor,
create_dl_layer_zoom_size, create_dl_layer_zoom_to_layer_size
Module
Deep Learning Professional

get_dl_model_layer_activations ( : Activations : DLModelHandle, LayerName : )

Get the activations of a Deep Learning model layer.


The operator get_dl_model_layer_activations returns in Activations the activations of the specified
layer LayerName of the model DLModelHandle.
Activations is a tuple of batch_size objects, where every object is an image with the size (width,
height, depth) of the layer LayerName.
Parameters
. Activations (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; object : real
Output activations.
. DLModelHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_model ; handle
Handle of the deep learning model.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the layer to be queried.
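A brief sketch (model, sample, and layer name are assumed to exist, analogous to the example of
get_dl_model_layer_weights):

* Run a training step so that the layer activations are available.
train_dl_model_batch (DLModelHandle, DLSample, DLTrainResult)
* Retrieve the activations of layer 'conv': one image per batch sample.
get_dl_model_layer_activations (ActivationsConv, DLModelHandle, 'conv')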
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).


• Processed without parallelization.


Module
Deep Learning Professional

get_dl_model_layer_gradients ( : Gradients : DLModelHandle, LayerName : )

Get the gradients of a Deep Learning model layer.


The operator get_dl_model_layer_gradients returns in Gradients the gradients of the specified
layer LayerName of the model DLModelHandle.
Gradients is a tuple of batch_size objects, where every object is an image with the size (width,
height, depth) of the layer LayerName.
Parameters
. Gradients (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; object : real
Output gradients.
. DLModelHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_model ; handle
Handle of the deep learning model.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the layer to be queried.
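Analogous to get_dl_model_layer_activations, a brief sketch (model, sample, and layer name are
assumed to exist, as in the example of get_dl_model_layer_weights):

* After a training step, the gradients of layer 'conv' can be inspected.
train_dl_model_batch (DLModelHandle, DLSample, DLTrainResult)
get_dl_model_layer_gradients (GradientsConv, DLModelHandle, 'conv')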
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Module
Deep Learning Professional

get_dl_model_layer_param ( : : DLModelHandle, LayerName, ParamName : ParamValue )

Retrieve parameter values for a given layer.


The operator get_dl_model_layer_param returns for a layer the value of the parameter ParamName in
ParamValue. The layer is referred to by its name LayerName in the model DLModelHandle. You can retrieve
the layer names using get_dl_model_param with its option ’layer_names’ or ’summary’.
Which generic and layer-specific parameters can be queried is described in the specific operator references
(create_dl_layer_*).
Parameters
. DLModelHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_model ; handle
Deep learning model.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. ParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the queried parameter.
Default: ’type’
List of values: ParamName ∈ {’bias_filler’, ’bias_filler_variance_norm’, ’bias_filler_const_val’,
’bias_term’, ’input_layer’, ’is_inference_output’, ’leaky_relu_alpha’, ’learning_rate_multiplier’,
’learning_rate_multiplier_bias’, ’name’, ’num_trainable_params’, ’output_layer’, ’shape’, ’type’,
’upper_bound’, ’weight_filler’, ’weight_filler_const_val’, ’weight_filler_variance_norm’}
Restriction: length(ParamName) > 0


. ParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . tuple(-array) ; string / real / integer
Value of the queried parameter.
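A brief sketch (the model is assumed to contain a layer named 'conv'; the layer name is illustrative):

* List the available layer names of the model.
get_dl_model_param (DLModelHandle, 'layer_names', LayerNames)
* Query the output shape and the number of trainable parameters of layer 'conv'.
get_dl_model_layer_param (DLModelHandle, 'conv', 'shape', Shape)
get_dl_model_layer_param (DLModelHandle, 'conv', 'num_trainable_params', NumParams)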
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
See also
get_dl_model_param, set_dl_model_layer_param
Module
Foundation. This operator uses dynamic licensing (see the ’Installation Guide’). Which of the following modules
is required depends on the specific usage of the operator:
3D Metrology, OCR/OCV, Deep Learning Professional

get_dl_model_layer_weights ( : Weights : DLModelHandle, LayerName, WeightsType : )

Get the weights (or values) of a Deep Learning model layer.


The operator get_dl_model_layer_weights returns in Weights the values of the layer LayerName of
the model DLModelHandle.
The parameter WeightsType determines which type of layer values are retrieved. The following values are
supported for WeightsType:

• ’batchnorm_mean’: Batch-wise calculated mean values to normalize the inputs. For further information,
please refer to create_dl_layer_batch_normalization.
Restriction: This value is only supported if the layer is of type ’batchnorm’.
• ’batchnorm_mean_avg’: Average of the batch-wise calculated mean values to normalize the inputs. For
further information, please refer to create_dl_layer_batch_normalization.
Restriction: This value is only supported if the layer is of type ’batchnorm’.
• ’batchnorm_variance’: Batch-wise calculated variance values to normalize the inputs. For further informa-
tion, please refer to create_dl_layer_batch_normalization.
Restriction: This value is only supported if the layer is of type ’batchnorm’.
• ’batchnorm_variance_avg’: Average of the batch-wise calculated variance values to normalize the inputs.
For further information, please refer to create_dl_layer_batch_normalization.
Restriction: This value is only supported if the layer is of type ’batchnorm’.
• ’bias’: Biases of the layer.
• ’bias_gradient’: Gradients of the biases of the layer.
• ’bias_gradient_norm_l2’: Gradients of the biases of the layer in terms of L2 norm.
• ’bias_norm_l2’: Biases of the layer in terms of L2 norm.
• ’bias_update’: Update of the biases of the layer. This is used, e.g., in a solver that uses the last update.
• ’bias_update_norm_l2’: Update of the biases of the layer in terms of L2 norm. This is used in a solver that
uses the last update.
• ’weights’: Weights of the layer.
• ’weights_gradient’: Gradients of the weights of the layer.
• ’weights_gradient_norm_l2’: Gradients of the weights of the layer in terms of L2 norm.
• ’weights_norm_l2’: Weights of the layer in terms of L2 norm.
• ’weights_update’: Update of the weights of the layer. This is used in a solver which uses the last update.
• ’weights_update_norm_l2’: Update of the weights of the layer in terms of L2 norm. This is used in a solver
which uses the last update.


The following table gives an overview of which values of WeightsType can be set using
set_dl_model_layer_weights and which ones can be retrieved using
get_dl_model_layer_weights.

Layer Parameters                         set   get

’batchnorm_mean’                          x     x
’batchnorm_mean_avg’                      x     x
’batchnorm_variance’                      x     x
’batchnorm_variance_avg’                  x     x
’bias’                                    x     x
’bias_gradient’                                 x
’bias_gradient_norm_l2’                         x
’bias_norm_l2’                                  x
’bias_update’                                   x
’bias_update_norm_l2’                           x
’weights’                                 x     x
’weights_gradient’                              x
’weights_gradient_norm_l2’                      x
’weights_norm_l2’                               x
’weights_update’                                x
’weights_update_norm_l2’                        x

Attention
The operator get_dl_model_layer_weights is only applicable to self-created networks. For networks
delivered by HALCON, the operator returns an empty tuple.
Parameters
. Weights (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; object : real
Output weights.
. DLModelHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_model ; handle
Handle of the deep learning model.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the layer to be queried.
. WeightsType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Selected type of layer values to be returned.
Default: ’weights’
List of values: WeightsType ∈ {’weights’, ’weights_norm_l2’, ’weights_update’,
’weights_update_norm_l2’, ’weights_gradient’, ’weights_gradient_norm_l2’, ’bias’, ’bias_norm_l2’,
’bias_update’, ’bias_update_norm_l2’, ’bias_gradient’, ’bias_gradient_norm_l2’, ’batchnorm_mean’,
’batchnorm_variance’, ’batchnorm_mean_avg’, ’batchnorm_variance_avg’}
Example

set_system ('seed_rand', 42)


* Create a small model network.
create_dl_layer_input ('input', [InputImageSize[0],InputImageSize[1],1], [],\
[], DLGraphNodeInput)
create_dl_layer_convolution (DLGraphNodeInput, 'conv', 3, 1, 1, 2, 1, 'none',\
'none', [], [], DLGraphNodeConvolution)
create_dl_layer_activation (DLGraphNodeConvolution, 'relu', 'relu', [], [],\
DLGraphNodeActivation)
create_dl_layer_dense (DLGraphNodeActivation, 'dense', 3, [], [],\
DLGraphNodeDense)
create_dl_layer_softmax (DLGraphNodeDense, 'softmax', [], [],\
DLGraphNodeSoftMax)
create_dl_model (DLGraphNodeSoftMax, DLModelHandle)


*
set_dl_model_param (DLModelHandle, 'type', 'classification')
set_dl_model_param (DLModelHandle, 'batch_size', 1)
set_dl_model_param (DLModelHandle, 'runtime', 'gpu')
set_dl_model_param (DLModelHandle, 'runtime_init', 'immediately')
*
* Train for 5 iterations.
for TrainIterations := 1 to NumTrainIterations by 1
train_dl_model_batch (DLModelHandle, DLSample, DLTrainResult)
endfor
*
* Get the gradients, weights, and activations.
get_dl_model_layer_gradients (GradientsSoftmax, DLModelHandle, 'softmax')
get_dl_model_layer_gradients (GradientsDense, DLModelHandle, 'dense')
get_dl_model_layer_gradients (GradientsConv, DLModelHandle, 'conv')
*
get_dl_model_layer_weights (WeightsDense, DLModelHandle, 'dense',\
'weights_gradient')
get_dl_model_layer_weights (WeightsConv, DLModelHandle, 'conv',\
'weights_gradient')
*
get_dl_model_layer_activations (ActivationsDense, DLModelHandle, 'dense')
get_dl_model_layer_activations (ActivationsConv, DLModelHandle, 'conv')

Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
create_dl_model, train_dl_classifier_batch, set_dl_model_layer_weights
Possible Successors
set_dl_model_layer_weights
Alternatives
get_dl_model_layer_activations, get_dl_model_layer_gradients
Module
Foundation. This operator uses dynamic licensing (see the ’Installation Guide’). Which of the following modules
is required depends on the specific usage of the operator:
Deep Learning Professional

load_dl_model_weights ( : : DLModelHandleSource,
DLModelHandleTarget : ChangesByLayer )

Load the weights of a source model into a target model.


The operator load_dl_model_weights loads the weights of a source model DLModelHandleSource into a
target model DLModelHandleTarget. For every layer in the target model, the weights are only changed if the
source model contains a layer with the same name and the same weight shape. Note that
DLModelHandleSource must be different from DLModelHandleTarget, i.e., you cannot use the same
model handle as source and target.
ChangesByLayer is a tuple indicating for every target layer how many weights changed. Its entries are sorted by
ascending layer IDs. The layer IDs can be queried via the operator get_dl_model_param with the parameter
’summary’.
Note that ’weights’ means all weights and biases of all layers that can have such values (e.g., convolution layers,
batch normalization layers, etc.).


Parameters
. DLModelHandleSource (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_model ; handle
Handle of the source deep learning model.
. DLModelHandleTarget (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_model ; handle
Handle of the target deep learning model.
. ChangesByLayer (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Indicates for every target layer how many weights changed.
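A brief sketch (the file names are placeholders; the two models are assumed to share layer names and weight
shapes for the layers to be transferred):

* Read a previously trained source model and a newly built target model.
read_dl_model ('pretrained_model.hdl', DLModelHandleSource)
read_dl_model ('new_model.hdl', DLModelHandleTarget)
* Transfer all weights whose layer name and weight shape match.
load_dl_model_weights (DLModelHandleSource, DLModelHandleTarget, ChangesByLayer)
* ChangesByLayer lists, per target layer (ascending layer IDs), how many weights changed.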
Result
If the parameters are valid, the operator load_dl_model_weights returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Module
Deep Learning Professional

set_dl_model_layer_param ( : : DLModelHandle, LayerName, ParamName, ParamValue : )

Set parameter values of a given layer.


The operator set_dl_model_layer_param sets the value ParamValue of the parameter ParamName for
a layer. The layer is referred to by its name LayerName in the model DLModelHandle. You can retrieve the
layer names using get_dl_model_param with its option ’layer_names’ or ’summary’.
Which generic and layer-specific parameters can be set is described in the reference entries of the operators used
for creating the layers (create_dl_layer_*).
Parameters
. DLModelHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_model ; handle
Deep learning model.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. ParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Name of the set parameter.
Default: []
List of values: ParamName ∈ {’bias_filler’, ’bias_filler_variance_norm’, ’bias_filler_const_val’,
’is_inference_output’, ’leaky_relu_alpha’, ’learning_rate_multiplier’, ’learning_rate_multiplier_bias’, ’name’,
’upper_bound’, ’weight_filler’, ’weight_filler_const_val’, ’weight_filler_variance_norm’}
. ParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Value of the set parameter.
Default: []
List of values: ParamValue ∈ {’xavier’, ’msra’, ’const’, ’norm_in’, ’norm_out’, ’norm_average’, ’true’,
’false’}
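A brief sketch (model and layer name are illustrative): setting the learning rate multipliers of a layer to 0
effectively freezes that layer during training.

* Freeze layer 'conv' by disabling its weight and bias updates.
set_dl_model_layer_param (DLModelHandle, 'conv', 'learning_rate_multiplier', 0)
set_dl_model_layer_param (DLModelHandle, 'conv', 'learning_rate_multiplier_bias', 0)
* Verify the setting.
get_dl_model_layer_param (DLModelHandle, 'conv', 'learning_rate_multiplier', LRMult)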
Execution Information

• Multithreading type: reentrant (runs in parallel with non-exclusive operators).


• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
See also
set_dl_model_param, get_dl_model_param, get_dl_model_layer_param


Module
Foundation. This operator uses dynamic licensing (see the ’Installation Guide’). Which of the following modules
is required depends on the specific usage of the operator:
3D Metrology, OCR/OCV, Deep Learning Professional

set_dl_model_layer_weights ( Weights : : DLModelHandle, LayerName, WeightsType : )

Set the weights (or values) of a Deep Learning model layer.


The operator set_dl_model_layer_weights sets the given Weights for the layer LayerName of the
model DLModelHandle.
The parameter WeightsType determines which type of layer values is set. For the values that can be set, please
refer to the documentation of get_dl_model_layer_weights.
Attention
The operator set_dl_model_layer_weights is only applicable to self-created networks. For networks
delivered by HALCON, the operator has no effect.
Parameters
. Weights (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; object : real
Input weights.
. DLModelHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_model ; handle
Handle of the deep learning model.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the layer whose weights are to be set.
. WeightsType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Selected type of layer values to be set.
Default: ’weights’
List of values: WeightsType ∈ {’weights’, ’bias’, ’batchnorm_mean’, ’batchnorm_variance’,
’batchnorm_mean_avg’, ’batchnorm_variance_avg’}
Example

* Create weights for a convolution layer.


gen_image_const (Weights, 'real', 1, 1)
paint_region (Weights, Weights, Weights, 1, 'fill')
gen_empty_obj (WeightsArray)
for Index := 0 to 10 by 1
concat_obj (WeightsArray, Weights, WeightsArray)
endfor
*
* Input image with rows consisting of 1s to 10s.
gen_image_const (Image, 'real', 10, 10)
for Index := 0 to 9 by 1
gen_rectangle1 (Rectangle, Index, 0, Index, 9)
paint_region (Rectangle, Image, Image, Index + 1, 'fill')
endfor
*
* Create a small model network.
create_dl_layer_input ('image', [10, 2, 1], [], [], ImageNode)
create_dl_layer_convolution (ImageNode, 'conv', 1, 1, 2, 11, 1, 'none', \
'