HALCON/HDevelop Operator Reference (en)
1 1D Measuring 1
close_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
deserialize_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
fuzzy_measure_pairing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
fuzzy_measure_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
fuzzy_measure_pos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
gen_measure_arc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
gen_measure_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
get_measure_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
measure_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
measure_pos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
measure_projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
measure_thresh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
read_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
reset_fuzzy_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
serialize_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
set_fuzzy_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
set_fuzzy_measure_norm_pair . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
translate_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
write_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2 2D Metrology 31
add_metrology_object_circle_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
add_metrology_object_ellipse_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
add_metrology_object_generic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
add_metrology_object_line_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
add_metrology_object_rectangle2_measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
align_metrology_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
apply_metrology_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
clear_metrology_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
clear_metrology_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
copy_metrology_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
create_metrology_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
deserialize_metrology_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
get_metrology_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
get_metrology_object_fuzzy_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
get_metrology_object_indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
get_metrology_object_measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
get_metrology_object_model_contour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
get_metrology_object_num_instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
get_metrology_object_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
get_metrology_object_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
get_metrology_object_result_contour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
read_metrology_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
reset_metrology_object_fuzzy_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
reset_metrology_object_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
serialize_metrology_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
set_metrology_model_image_size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
set_metrology_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
set_metrology_object_fuzzy_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
set_metrology_object_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
write_metrology_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3 3D Matching 77
3.1 3D Box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
find_box_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3.2 3D Gripping Point Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.3 Deep 3D Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
apply_deep_matching_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
get_deep_matching_3d_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
read_deep_matching_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
set_deep_matching_3d_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
write_deep_matching_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
3.4 Deformable Surface-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
add_deformable_surface_model_reference_point . . . . . . . . . . . . . . . . . . . . . . . . . 95
add_deformable_surface_model_sample . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
clear_deformable_surface_matching_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
clear_deformable_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
create_deformable_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
deserialize_deformable_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
find_deformable_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
get_deformable_surface_matching_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
get_deformable_surface_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
read_deformable_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
refine_deformable_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
serialize_deformable_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
write_deformable_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
3.5 Shape-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
clear_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
create_cam_pose_look_at_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
create_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
deserialize_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
find_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
get_shape_model_3d_contours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
get_shape_model_3d_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
project_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
read_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
serialize_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
trans_pose_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
write_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
3.6 Surface-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
clear_surface_matching_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
clear_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
create_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
deserialize_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
find_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
find_surface_model_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
get_surface_matching_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
get_surface_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
read_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
refine_surface_model_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
refine_surface_model_pose_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
serialize_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
set_surface_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
write_surface_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
5 3D Reconstruction 249
5.1 Binocular Stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
binocular_disparity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
binocular_disparity_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
binocular_disparity_ms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
binocular_distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
binocular_distance_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
binocular_distance_ms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
disparity_image_to_xyz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
disparity_to_distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
disparity_to_point_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
distance_to_disparity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
essential_to_fundamental_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
gen_binocular_proj_rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
gen_binocular_rectification_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
intersect_lines_of_sight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
match_essential_matrix_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
match_fundamental_matrix_distortion_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
match_fundamental_matrix_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
match_rel_pose_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
reconst3d_from_fundamental_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
rel_pose_to_fundamental_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
vector_to_essential_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
vector_to_fundamental_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
vector_to_fundamental_matrix_distortion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
vector_to_rel_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
5.2 Depth From Focus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
depth_from_focus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
select_grayvalues_from_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
5.3 Multi-View Stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
clear_stereo_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
create_stereo_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
get_stereo_model_image_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
get_stereo_model_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
get_stereo_model_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
get_stereo_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
reconstruct_points_stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
reconstruct_surface_stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
set_stereo_model_image_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
set_stereo_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
5.4 Photometric Stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
estimate_al_am . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
estimate_sl_al_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
estimate_sl_al_zc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
estimate_tilt_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
estimate_tilt_zc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
photometric_stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
reconstruct_height_field_from_gradient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
sfs_mod_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
sfs_orig_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
sfs_pentland . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
shade_height_field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
uncalibrated_photometric_stereo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
5.5 Sheet of Light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
apply_sheet_of_light_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
calibrate_sheet_of_light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
clear_sheet_of_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
create_sheet_of_light_calib_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
create_sheet_of_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
deserialize_sheet_of_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
get_sheet_of_light_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
get_sheet_of_light_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
get_sheet_of_light_result_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
measure_profile_sheet_of_light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
query_sheet_of_light_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
read_sheet_of_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
reset_sheet_of_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
serialize_sheet_of_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
set_profile_sheet_of_light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
set_sheet_of_light_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
write_sheet_of_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
5.6 Structured Light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
6 Calibration 371
6.1 Binocular . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
binocular_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
6.2 Calibration Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
caltab_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
create_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
disp_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
find_calib_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
find_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
find_marks_and_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
gen_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
sim_caltab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
6.3 Camera Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
cam_mat_to_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
cam_par_to_cam_mat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
deserialize_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
read_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
serialize_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
write_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
6.4 Hand-Eye . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
calibrate_hand_eye . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
get_calib_data_observ_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
hand_eye_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
set_calib_data_observ_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
6.5 Inverse Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
get_line_of_sight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
6.6 Monocular . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
camera_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
6.7 Multi-View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
calibrate_cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
clear_calib_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
clear_camera_setup_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
create_calib_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
create_camera_setup_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
deserialize_calib_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
deserialize_camera_setup_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
get_calib_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
get_calib_data_observ_contours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
get_calib_data_observ_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
get_camera_setup_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
query_calib_data_observ_indices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
read_calib_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
read_camera_setup_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
remove_calib_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
remove_calib_data_observ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
serialize_calib_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
serialize_camera_setup_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
set_calib_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
set_calib_data_calib_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
set_calib_data_cam_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
set_calib_data_observ_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
set_camera_setup_cam_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
set_camera_setup_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
write_calib_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
write_camera_setup_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
6.8 Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
cam_par_pose_to_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
project_3d_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
project_hom_point_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
project_point_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
6.9 Rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
change_radial_distortion_cam_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
change_radial_distortion_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
change_radial_distortion_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
change_radial_distortion_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
contour_to_world_plane_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
gen_image_to_world_plane_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
gen_radial_distortion_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
image_points_to_world_plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
image_to_world_plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
6.10 Self-Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
radial_distortion_self_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
radiometric_self_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
stationary_camera_self_calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
7 Classification 497
7.1 Gaussian Mixture Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
add_class_train_data_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
add_sample_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
classify_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
clear_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
clear_samples_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
create_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
deserialize_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
evaluate_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
get_class_train_data_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
get_params_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
get_prep_info_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
get_sample_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
get_sample_num_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
read_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
read_samples_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
select_feature_set_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
serialize_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
train_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
write_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
write_samples_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
7.2 K-Nearest Neighbors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
add_class_train_data_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
add_sample_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
classify_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
clear_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
create_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
deserialize_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
get_class_train_data_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
get_params_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 524
get_sample_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
get_sample_num_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
read_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
select_feature_set_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
serialize_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
set_params_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
train_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
write_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
7.3 Look-Up Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
clear_class_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
create_class_lut_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
create_class_lut_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
create_class_lut_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
create_class_lut_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
7.4 Misc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
add_sample_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
clear_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
create_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
deserialize_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
get_sample_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
get_sample_num_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
read_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
select_sub_feature_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
serialize_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
set_feature_lengths_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
write_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
7.5 Neural Nets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
add_class_train_data_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
add_sample_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
classify_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
clear_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
clear_samples_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
create_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
deserialize_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
evaluate_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
get_class_train_data_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
get_params_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 559
get_prep_info_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
get_regularization_params_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 562
get_rejection_params_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
get_sample_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
get_sample_num_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
read_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 565
read_samples_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 566
select_feature_set_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567
serialize_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569
set_regularization_params_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569
set_rejection_params_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 574
train_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
write_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
write_samples_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
7.6 Support Vector Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
add_class_train_data_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
add_sample_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
classify_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
clear_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
clear_samples_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
create_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
deserialize_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
evaluate_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
get_class_train_data_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
get_params_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
get_prep_info_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
get_sample_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
get_sample_num_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
get_support_vector_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
get_support_vector_num_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
read_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
read_samples_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
reduce_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
select_feature_set_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
serialize_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
train_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
write_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
write_samples_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
8 Control 605
assign . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605
assign_at . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 606
break . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 606
case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
catch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
comment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
continue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609
convert_tuple_to_vector_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609
convert_vector_to_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
default . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
else . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
elseif . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
endfor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
endif . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
endswitch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
endtry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
endwhile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
executable_expression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
exit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 613
export_def . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614
for . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
global . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
if . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
import . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
par_join . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
repeat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 620
return . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 620
stop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 620
switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
throw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
try . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
until . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
while . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 625
10 Develop 809
dev_clear_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 809
dev_clear_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 809
dev_close_inspect_ctrl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 810
dev_close_tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
dev_close_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811
dev_disp_text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 812
dev_display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 814
dev_error_var . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 815
dev_get_exception_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 816
dev_get_preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 817
dev_get_system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 818
dev_get_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 818
dev_inspect_ctrl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 819
dev_open_dialog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 820
dev_open_file_dialog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 820
dev_open_tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 821
dev_open_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 825
dev_set_check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 828
dev_set_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
dev_set_colored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 831
dev_set_contour_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 832
dev_set_draw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 833
dev_set_line_width . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 833
dev_set_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 834
dev_set_paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 835
dev_set_part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
dev_set_preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 836
dev_set_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 837
dev_set_system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 838
dev_set_tool_geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
dev_set_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 840
dev_set_window_extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841
dev_show_tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 842
dev_update_pc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 843
dev_update_time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 843
dev_update_var . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 844
dev_update_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 845
11 File 847
11.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 847
close_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 847
fnew_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 847
fread_bytes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 848
fread_char . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 849
fread_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 850
fread_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 851
fwrite_bytes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 852
fwrite_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 853
open_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 854
11.2 Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 856
deserialize_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 856
image_to_memory_block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 856
memory_block_to_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 858
read_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 858
read_image_metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 860
read_sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 861
serialize_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 863
write_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 864
write_image_metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 867
11.3 Misc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 868
copy_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 868
delete_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 868
file_exists . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 868
get_current_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 869
list_files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 869
make_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 870
read_world_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 871
remove_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 871
set_current_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 872
11.4 Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 872
deserialize_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 872
read_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 873
serialize_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 873
write_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 874
11.5 Region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 875
deserialize_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 875
read_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 875
serialize_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 876
write_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 877
11.6 Tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 878
deserialize_handle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 878
deserialize_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 878
read_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 879
serialize_handle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 879
serialize_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 880
tuple_is_serializable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 881
tuple_is_serializable_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 881
write_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 882
11.7 XLD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 883
deserialize_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 883
read_contour_xld_arc_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 883
read_contour_xld_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 884
read_polygon_xld_arc_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886
read_polygon_xld_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 887
serialize_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 888
write_contour_xld_arc_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 889
write_contour_xld_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 889
write_polygon_xld_arc_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 892
write_polygon_xld_dxf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 893
12 Filters 895
12.1 Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
abs_diff_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
abs_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 898
acos_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 899
add_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 900
asin_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 901
atan2_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 902
atan_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 902
cos_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 903
div_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 904
exp_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 905
gamma_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 905
invert_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 907
log_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 908
max_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 908
min_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 909
mult_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 910
pow_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 912
scale_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 912
sin_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 914
sqrt_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 914
sub_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 915
tan_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 917
12.2 Bit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 917
bit_and . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 917
bit_lshift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 918
bit_mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 919
bit_not . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 920
bit_or . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 921
bit_rshift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 921
bit_slice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 922
bit_xor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 923
12.3 Color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 924
apply_color_trans_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 924
cfa_to_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 925
clear_color_trans_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 927
create_color_trans_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 927
gen_principal_comp_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 928
linear_trans_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 929
principal_comp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 930
rgb1_to_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 931
rgb3_to_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 932
trans_from_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 933
trans_to_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 939
12.4 Edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 946
close_edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 946
close_edges_length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
derivate_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 948
diff_of_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 951
edges_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 953
edges_color_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 955
edges_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 957
edges_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 960
frei_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 963
frei_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 964
highpass_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 965
info_edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 967
kirsch_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 968
kirsch_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 969
laplace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 970
laplace_of_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 972
prewitt_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 973
prewitt_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 974
roberts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 976
robinson_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 977
robinson_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 978
sobel_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 979
sobel_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 981
12.5 Enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 983
coherence_enhancing_diff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 983
emphasize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 985
equ_histo_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 986
equ_histo_image_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 987
illuminate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 989
mean_curvature_flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 990
scale_image_max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 992
shock_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 993
12.6 FFT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 994
convol_fft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 994
convol_gabor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 995
correlation_fft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 996
deserialize_fft_optimization_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 997
energy_gabor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 998
fft_generic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 999
fft_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1001
fft_image_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1002
gen_bandfilter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1002
gen_bandpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1004
gen_derivative_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1005
gen_filter_mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1006
gen_gabor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1007
gen_gauss_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1009
gen_highpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1011
gen_lowpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1012
gen_mean_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1013
gen_sin_bandpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1014
gen_std_bandpass . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1016
optimize_fft_speed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1017
optimize_rft_speed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1018
phase_correlation_fft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1019
phase_deg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1020
phase_rad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1021
power_byte . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1022
power_ln . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1022
power_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1023
read_fft_optimization_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1024
rft_generic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1025
serialize_fft_optimization_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1026
write_fft_optimization_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1027
12.7 Geometric Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1028
affine_trans_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1028
affine_trans_image_size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1030
convert_map_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1032
map_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1033
mirror_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1035
polar_trans_image_ext . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1035
polar_trans_image_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1037
projective_trans_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1039
projective_trans_image_size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1041
rotate_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1042
zoom_image_factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1044
zoom_image_size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1045
12.8 Inpainting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1046
harmonic_interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1046
inpainting_aniso . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1047
inpainting_ced . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1050
inpainting_ct . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1052
inpainting_mcf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1055
inpainting_texture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1056
12.9 Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1058
bandpass_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1058
lines_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1059
lines_facet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1061
lines_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1063
12.10 Match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1066
exhaustive_match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1066
exhaustive_match_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1067
gen_gauss_pyramid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1069
monotony . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1070
12.11 Misc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1071
convol_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1071
deviation_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1073
expand_domain_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1073
gray_inside . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1075
gray_skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1076
lut_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1077
symmetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1078
topographic_sketch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1079
12.12 Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1080
add_noise_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1080
add_noise_white . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1081
gauss_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1082
noise_distribution_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1083
sp_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1084
12.13 Optical Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1085
derivate_vector_field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1085
optical_flow_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1086
unwarp_image_vector_field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1094
vector_field_length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1095
12.14 Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1096
corner_response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1096
dots_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1097
points_foerstner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1098
points_harris . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1101
points_harris_binomial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1103
points_lepetit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1104
points_sojka . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1105
12.15 Scene Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1107
scene_flow_calib . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1107
scene_flow_uncalib . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1109
12.16 Smoothing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1114
anisotropic_diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1119
bilateral_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1120
binomial_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1125
eliminate_min_max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1126
eliminate_sp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1128
fill_interlace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1129
gauss_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1130
guided_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1132
info_smooth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1135
isotropic_diffusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1136
mean_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1137
mean_image_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1139
mean_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1140
mean_sp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1141
median_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1142
median_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1144
median_separate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1145
median_weighted . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1147
midrange_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1148
rank_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1149
rank_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1151
rank_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1152
sigma_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1154
smooth_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1155
trimmed_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1157
12.17 Texture Inspection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1158
deviation_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1158
entropy_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1159
texture_laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1160
12.18 Wiener Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1163
gen_psf_defocus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1163
gen_psf_motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1164
simulate_defocus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1166
simulate_motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1166
wiener_filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1168
wiener_filter_ni . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1169
13 Graphics 1173
13.1 3D Scene . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1173
add_scene_3d_camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1173
add_scene_3d_instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1174
add_scene_3d_label . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1174
add_scene_3d_light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1176
clear_scene_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1177
create_scene_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1177
display_scene_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1179
get_display_scene_3d_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1180
remove_scene_3d_camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1181
remove_scene_3d_instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1181
remove_scene_3d_label . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
remove_scene_3d_light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
render_scene_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1183
set_scene_3d_camera_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1183
set_scene_3d_instance_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1184
set_scene_3d_instance_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1186
set_scene_3d_label_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1187
set_scene_3d_light_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1189
set_scene_3d_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1189
set_scene_3d_to_world_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1190
13.2 Drawing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1191
drag_region1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1192
drag_region2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1193
drag_region3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1194
draw_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1195
draw_circle_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1196
draw_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1197
draw_ellipse_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1199
draw_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1200
draw_line_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1201
draw_nurbs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1202
draw_nurbs_interp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1204
draw_nurbs_interp_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1206
draw_nurbs_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1208
draw_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1210
draw_point_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211
draw_polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1212
draw_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1213
draw_rectangle1_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1214
draw_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1215
draw_rectangle2_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1216
draw_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1217
draw_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1218
draw_xld_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1220
13.3 LUT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1221
get_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1221
query_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1222
set_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1222
13.4 Mouse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1225
get_mbutton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1225
get_mbutton_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1226
get_mposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1227
get_mposition_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1228
get_mshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1229
query_mshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1230
send_mouse_double_click_event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1230
send_mouse_down_event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1231
send_mouse_drag_event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1232
send_mouse_up_event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1233
set_mshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1234
13.5 Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1234
attach_background_to_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1234
attach_drawing_object_to_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1235
clear_drawing_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1236
create_drawing_object_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1237
create_drawing_object_circle_sector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1238
create_drawing_object_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1240
create_drawing_object_ellipse_sector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1241
create_drawing_object_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1242
create_drawing_object_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1243
create_drawing_object_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1244
create_drawing_object_text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1245
create_drawing_object_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1246
detach_background_from_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1247
detach_drawing_object_from_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1248
get_drawing_object_iconic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1249
get_drawing_object_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1249
get_window_background_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1250
set_content_update_callback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1251
set_drawing_object_callback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1252
set_drawing_object_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1253
set_drawing_object_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1255
13.6 Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1256
disp_arc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1256
disp_arrow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1257
disp_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1259
disp_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1259
disp_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1261
disp_cross . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1261
disp_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1262
disp_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1264
disp_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1265
disp_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1266
disp_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1267
disp_polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1271
disp_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1272
disp_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1274
disp_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1275
disp_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1276
13.7 Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1276
convert_coordinates_image_to_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1276
convert_coordinates_window_to_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1278
get_contour_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1279
get_draw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1279
get_hsi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1280
get_icon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1280
get_line_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1281
get_line_width . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1282
get_paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1282
get_part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1283
get_part_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1283
get_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1284
get_rgba . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1285
get_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1285
get_window_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1286
query_all_colors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1287
query_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1288
query_colored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1288
query_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1289
query_line_width . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1290
query_paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1290
query_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1291
set_color . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1291
set_colored . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1293
set_contour_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1294
set_draw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1295
set_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1295
set_hsi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1296
set_icon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1297
set_line_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1298
set_line_width . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1299
set_paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1300
set_part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1302
set_part_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1303
set_rgb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1304
set_rgba . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1305
set_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1306
set_window_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1307
13.8 Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1309
disp_text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1309
get_font . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1312
get_font_extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1313
get_string_extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1313
get_tposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1314
new_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1315
query_font . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1316
read_char . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1316
read_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1317
set_font . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1318
set_tposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1319
write_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1320
13.9 Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1321
clear_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1321
close_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1322
copy_rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1322
dump_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1324
dump_window_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1326
flush_buffer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1326
get_disp_object_model_3d_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1327
get_os_window_handle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1328
get_window_attr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1330
get_window_extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1330
get_window_pointer3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1331
get_window_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1332
new_extern_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1333
open_window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1335
query_window_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1338
set_window_attr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1339
set_window_dc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1340
set_window_extents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1340
set_window_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1341
unproject_coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1342
update_window_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1343
14 Identification 1347
14.1 Bar Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1347
clear_bar_code_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1349
create_bar_code_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1349
decode_bar_code_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1351
deserialize_bar_code_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1352
find_bar_code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1353
get_bar_code_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1356
get_bar_code_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1358
get_bar_code_param_specific . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1360
get_bar_code_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1361
query_bar_code_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1368
read_bar_code_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1370
serialize_bar_code_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1370
set_bar_code_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1371
set_bar_code_param_specific . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1381
write_bar_code_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1383
14.2 Data Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1384
clear_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1386
create_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1387
deserialize_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1392
find_data_code_2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1392
get_data_code_2d_objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1398
get_data_code_2d_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1401
get_data_code_2d_results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1405
query_data_code_2d_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1422
read_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1424
serialize_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1425
set_data_code_2d_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1425
write_data_code_2d_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1435
15 Image 1437
15.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1442
get_grayval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1442
get_grayval_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1443
get_grayval_interpolated . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1444
get_image_pointer1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1446
get_image_pointer1_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1447
get_image_pointer3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1448
get_image_size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1449
get_image_time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1450
get_image_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1450
15.2 Acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1451
close_framegrabber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1451
get_framegrabber_callback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1452
get_framegrabber_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1453
get_framegrabber_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1454
grab_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1455
grab_data_async . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1456
grab_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1457
grab_image_async . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1458
grab_image_start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1459
info_framegrabber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1461
open_framegrabber . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1463
set_framegrabber_callback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1465
set_framegrabber_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1467
set_framegrabber_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1467
15.3 Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1468
access_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1468
append_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1469
channels_to_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1470
compose2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1470
compose3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1471
compose4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1472
compose5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1473
compose6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1473
compose7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1474
count_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1475
decompose2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1476
decompose3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1477
decompose4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1478
decompose5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1479
decompose6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1479
decompose7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1480
image_to_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1482
15.4 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1482
copy_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1482
gen_image1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1483
gen_image1_extern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1484
gen_image1_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1486
gen_image3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1487
gen_image3_extern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1489
gen_image_const . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1491
gen_image_gray_ramp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1493
gen_image_interleaved . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1494
gen_image_proto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1496
gen_image_surface_first_order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1497
gen_image_surface_second_order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1499
interleave_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1501
region_to_bin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1503
region_to_label . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1504
region_to_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1505
15.5 Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1506
add_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1506
change_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1506
full_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1507
get_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1508
rectangle1_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1508
reduce_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1509
15.6 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1510
area_center_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1510
cooc_feature_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1511
cooc_feature_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1512
elliptic_axis_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1513
entropy_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1514
estimate_noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1515
fit_surface_first_order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1517
fit_surface_second_order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1519
fuzzy_entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1520
fuzzy_perimeter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1521
gen_cooc_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1522
gray_features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1524
gray_histo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1525
gray_histo_abs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1526
gray_histo_range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1527
gray_projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1528
histo_2dim . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1529
intensity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1530
min_max_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1531
moments_gray_plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1533
plane_deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1534
select_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1535
shape_histo_all . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1537
shape_histo_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1538
15.7 Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1539
add_image_border . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1539
change_format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1540
crop_domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1541
crop_domain_rel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1541
crop_part . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1542
crop_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1543
crop_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1544
tile_channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1546
tile_images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1547
tile_images_offset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1548
15.8 Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1550
overpaint_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1550
overpaint_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1551
paint_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1552
paint_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1553
paint_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1554
set_grayval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1556
15.9 Type Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1557
complex_to_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1557
convert_image_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1558
real_to_complex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1558
real_to_vector_field . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1559
vector_field_to_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1559
16 Inspection 1561
16.1 Bead Inspection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1561
apply_bead_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1561
clear_bead_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1562
create_bead_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1563
get_bead_inspection_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1565
set_bead_inspection_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1566
16.2 OCV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1567
close_ocv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1567
create_ocv_proj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1568
deserialize_ocv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1569
do_ocv_simple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1570
read_ocv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1571
serialize_ocv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1572
traind_ocv_proj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1572
write_ocv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1573
16.3 Structured Light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1574
clear_structured_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1575
create_structured_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1576
decode_structured_light_pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1577
deserialize_structured_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1578
gen_structured_light_pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1579
get_structured_light_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1583
get_structured_light_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1585
read_structured_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1586
reconstruct_surface_structured_light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1587
serialize_structured_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1588
set_structured_light_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1589
write_structured_light_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1593
16.4 Texture Inspection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1594
add_texture_inspection_model_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1598
apply_texture_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1599
clear_texture_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1600
clear_texture_inspection_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1601
create_texture_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1602
deserialize_texture_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1603
get_texture_inspection_model_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1605
get_texture_inspection_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1605
get_texture_inspection_result_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1607
read_texture_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1608
remove_texture_inspection_model_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1610
serialize_texture_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1611
set_texture_inspection_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1612
train_texture_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1616
write_texture_inspection_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1618
16.5 Variation Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1619
clear_train_data_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1619
clear_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1620
compare_ext_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1620
compare_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1622
create_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1623
deserialize_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1624
get_thresh_images_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1625
get_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1626
prepare_direct_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1626
prepare_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1628
read_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1629
serialize_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1630
train_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1631
write_variation_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1632
17 Legacy 1633
17.1 2D Metrology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1633
copy_metrology_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1633
transform_metrology_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1634
17.2 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1635
clear_sampset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1635
close_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1635
create_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1636
descript_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1637
deserialize_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1638
enquire_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1638
enquire_reject_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1639
get_class_box_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1640
learn_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1641
learn_sampset_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1642
read_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1643
read_sampset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1644
serialize_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1645
set_class_box_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1645
test_sampset_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1646
write_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1647
17.3 Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1648
ifelse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1648
17.4 DL Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1648
apply_dl_classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1651
clear_dl_classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1652
clear_dl_classifier_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1653
clear_dl_classifier_train_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1653
deserialize_dl_classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1654
get_dl_classifier_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1655
get_dl_classifier_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1656
get_dl_classifier_train_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1657
read_dl_classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1658
serialize_dl_classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1660
set_dl_classifier_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1661
train_dl_classifier_batch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1663
write_dl_classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1665
17.5 Develop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1666
dev_map_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1666
dev_map_prog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1667
dev_map_var . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1667
dev_unmap_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1667
dev_unmap_prog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1668
dev_unmap_var . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1668
17.6 Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1669
gauss_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1669
polar_trans_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1670
17.7 Graphics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1671
clear_rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1671
disp_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1672
disp_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1673
get_comprise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1674
get_fix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1675
get_fixed_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1675
get_insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1676
get_line_approx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1676
get_lut_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1677
get_pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1678
get_tshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1678
move_rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1679
open_textwindow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1680
query_insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1684
query_tshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1685
set_comprise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1685
set_fix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1686
set_fixed_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1687
set_insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1688
set_line_approx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1688
set_lut_style . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1689
set_pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1690
set_tshape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1691
slide_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1692
write_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1693
17.8 Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1693
add_sample_identifier_preparation_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1696
add_sample_identifier_training_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1698
apply_sample_identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1699
clear_sample_identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1701
create_sample_identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1702
deserialize_sample_identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1704
get_sample_identifier_object_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1705
get_sample_identifier_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1706
prepare_sample_identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1707
read_sample_identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1709
remove_sample_identifier_preparation_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1710
remove_sample_identifier_training_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1711
serialize_sample_identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1712
set_sample_identifier_object_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1712
set_sample_identifier_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1713
train_sample_identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1715
write_sample_identifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1716
17.9 Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1717
adapt_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1717
best_match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1718
best_match_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1719
best_match_pre_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1721
best_match_rot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1722
best_match_rot_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1723
clear_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1725
create_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1725
create_template_rot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1727
deserialize_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1729
fast_match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1729
fast_match_mg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1730
read_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1731
serialize_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1732
set_offset_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1733
set_reference_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1733
write_template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1734
17.10 Matching, Component-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1735
clear_all_component_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1735
clear_all_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1735
clear_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1736
clear_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1736
cluster_model_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1737
create_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1738
create_trained_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1741
deserialize_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1743
deserialize_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1744
find_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1745
gen_initial_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1750
get_component_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1752
get_component_model_tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1753
get_component_relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1755
get_found_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1756
get_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1758
inspect_clustered_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1759
modify_component_relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1760
read_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1762
read_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1762
serialize_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1763
serialize_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1763
train_model_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1764
write_component_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1768
write_training_components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1769
17.11 Morphology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1769
closing_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1769
dilation_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1770
dilation_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1772
erosion_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1773
erosion_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1774
fitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1775
gen_struct_elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1776
golay_elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1777
hit_or_miss_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1780
hit_or_miss_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1781
morph_hat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1782
morph_skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1784
morph_skiz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1785
opening_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1786
opening_seg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1787
thickening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1788
thickening_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1789
thickening_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1790
thinning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1792
thinning_golay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1793
thinning_seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1794
17.12 OCR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1795
close_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1795
create_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1796
create_text_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1799
deserialize_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1799
do_ocr_multi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1800
do_ocr_single . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1801
info_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1801
ocr_change_char . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1802
ocr_get_features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1803
read_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1804
serialize_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1804
testd_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1805
traind_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1806
trainf_ocr_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1807
write_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1807
17.13 Regions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1808
get_region_chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1808
hamming_change_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1809
interjacent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1810
17.14 Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1812
bin_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1812
class_ndim_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1812
expand_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1813
learn_ndim_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1815
17.15 Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1816
approx_chain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1816
approx_chain_simple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1820
clear_all_bar_code_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1821
clear_all_barriers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1822
clear_all_calib_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1822
clear_all_camera_setup_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1822
clear_all_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1823
clear_all_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1823
clear_all_class_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1824
clear_all_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1824
clear_all_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1825
clear_all_class_train_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1825
clear_all_color_trans_luts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1826
clear_all_conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1826
clear_all_data_code_2d_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1826
clear_all_deformable_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1827
clear_all_descriptor_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1827
clear_all_events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1828
clear_all_lexica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1828
clear_all_matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1829
clear_all_metrology_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1829
clear_all_mutexes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1829
clear_all_ncc_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1830
clear_all_object_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1830
clear_all_ocr_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1831
clear_all_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1831
clear_all_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1832
clear_all_sample_identifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1832
clear_all_scattered_data_interpolators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1832
clear_all_serialized_items . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1833
clear_all_shape_model_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1833
clear_all_shape_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1834
clear_all_sheet_of_light_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1834
clear_all_stereo_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1835
clear_all_surface_matching_results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1835
clear_all_surface_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1836
clear_all_templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1836
clear_all_text_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1836
clear_all_text_results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1837
clear_all_variation_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1837
close_all_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1838
close_all_class_box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1838
close_all_files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1839
close_all_framegrabbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1839
close_all_measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1839
close_all_ocrs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1840
close_all_ocvs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1840
close_all_serials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1841
close_all_sockets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1841
distance_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1841
filter_kalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1842
intersection_ll . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1846
partition_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1847
read_kalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1849
select_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1851
select_lines_longest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1853
update_kalman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1854
17.16 XLD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1856
union_straight_contours_histo_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1856
18 Matching 1859
18.1 Correlation-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1859
clear_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1859
create_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1859
deserialize_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1861
determine_ncc_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1862
find_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1863
find_ncc_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1867
get_ncc_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1871
get_ncc_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1872
get_ncc_model_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1873
read_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1873
serialize_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1874
set_ncc_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1875
set_ncc_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1875
write_ncc_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1876
18.2 Deep Counting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1877
apply_deep_counting_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1878
create_deep_counting_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1879
get_deep_counting_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1880
prepare_deep_counting_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1882
read_deep_counting_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1883
set_deep_counting_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1884
write_deep_counting_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1884
18.3 Deformable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1885
clear_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1885
create_local_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1886
create_local_deformable_model_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1888
create_planar_calib_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1890
create_planar_calib_deformable_model_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1893
create_planar_uncalib_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1895
create_planar_uncalib_deformable_model_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . 1899
deserialize_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1902
determine_deformable_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1903
find_local_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1906
find_planar_calib_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1908
find_planar_uncalib_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1910
get_deformable_model_contours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1915
get_deformable_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1916
get_deformable_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1916
read_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1918
serialize_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1918
set_deformable_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1919
set_deformable_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1920
set_local_deformable_model_metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1921
set_planar_calib_deformable_model_metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1922
set_planar_uncalib_deformable_model_metric . . . . . . . . . . . . . . . . . . . . . . . . . . . 1923
write_deformable_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1925
18.4 Descriptor-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1925
clear_descriptor_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1925
create_calib_descriptor_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1926
create_uncalib_descriptor_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1928
deserialize_descriptor_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1931
find_calib_descriptor_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1931
find_uncalib_descriptor_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1933
get_descriptor_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1936
get_descriptor_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1936
get_descriptor_model_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1937
get_descriptor_model_results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1938
read_descriptor_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1939
serialize_descriptor_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1940
set_descriptor_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1941
write_descriptor_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1941
18.5 Shape-Based . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1942
adapt_shape_model_high_noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1942
clear_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1943
create_aniso_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1943
create_aniso_shape_model_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1948
create_generic_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1953
create_scaled_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1953
create_scaled_shape_model_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1958
create_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1962
create_shape_model_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1966
deserialize_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1970
determine_shape_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1970
find_aniso_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1973
find_aniso_shape_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1979
find_generic_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1986
find_scaled_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1988
find_scaled_shape_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1993
find_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2000
find_shape_models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2005
get_generic_shape_model_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2012
get_generic_shape_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2012
get_generic_shape_model_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2014
get_generic_shape_model_result_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2016
get_shape_model_clutter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2017
get_shape_model_contours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2017
get_shape_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2018
get_shape_model_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2019
inspect_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2020
read_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2021
serialize_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2022
set_generic_shape_model_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2023
set_generic_shape_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2024
set_shape_model_clutter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2036
set_shape_model_metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2039
set_shape_model_origin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2041
set_shape_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2042
train_generic_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2043
write_shape_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2044
19 Matrix 2045
19.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2045
get_diagonal_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2045
get_full_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2046
get_sub_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2047
get_value_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2048
set_diagonal_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2049
set_full_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2052
set_sub_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2053
set_value_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2054
19.2 Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2055
abs_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2055
abs_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2056
add_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2057
add_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2058
div_element_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2059
div_element_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2060
invert_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2061
invert_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2063
mult_element_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2065
mult_element_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2066
mult_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2067
mult_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2069
pow_element_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2071
pow_element_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2072
pow_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2073
pow_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2074
pow_scalar_element_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2076
pow_scalar_element_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2077
scale_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2078
scale_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2079
solve_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2080
sqrt_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2082
sqrt_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2082
sub_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2083
sub_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2084
transpose_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2085
transpose_matrix_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2086
19.3 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2087
clear_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2087
copy_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2087
create_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2088
repeat_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2090
19.4 Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2091
decompose_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2091
orthogonal_decompose_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2093
svd_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2097
19.5 Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2099
eigenvalues_general_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2099
eigenvalues_symmetric_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2100
generalized_eigenvalues_general_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2101
generalized_eigenvalues_symmetric_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2103
19.6 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2104
determinant_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2104
get_size_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2105
max_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2106
mean_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2107
min_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2109
norm_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2110
sum_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2111
19.7 File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2113
deserialize_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2113
read_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2113
serialize_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2114
write_matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2114
20 Morphology 2117
20.1 Gray Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2117
dual_rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2119
gen_disc_se . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2121
gray_bothat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2122
gray_closing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2123
gray_closing_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2124
gray_closing_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2125
gray_dilation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2126
gray_dilation_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2127
gray_dilation_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2128
gray_erosion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2129
gray_erosion_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2130
gray_erosion_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2131
gray_opening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2132
gray_opening_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2133
gray_opening_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2134
gray_range_rect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2135
gray_tophat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2136
read_gray_se . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2137
20.2 Region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2138
bottom_hat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2140
boundary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2141
closing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2143
closing_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2144
closing_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2146
dilation1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2147
dilation2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2148
dilation_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2150
dilation_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2151
erosion1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2153
erosion2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2154
erosion_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2155
erosion_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2157
hit_or_miss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2158
minkowski_add1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2159
minkowski_add2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2161
minkowski_sub1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2162
minkowski_sub2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2164
opening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2165
opening_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2166
opening_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2167
pruning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2168
top_hat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2169
21 OCR 2171
21.1 Convolutional Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2171
clear_ocr_class_cnn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2171
deserialize_ocr_class_cnn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2171
do_ocr_multi_class_cnn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2172
do_ocr_single_class_cnn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2173
do_ocr_word_cnn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2174
get_params_ocr_class_cnn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2176
query_params_ocr_class_cnn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2177
read_ocr_class_cnn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2177
serialize_ocr_class_cnn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2178
21.2 Deep OCR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2179
apply_deep_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2183
create_deep_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2185
get_deep_ocr_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2186
read_deep_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2191
set_deep_ocr_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2192
write_deep_ocr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2193
21.3 K-Nearest Neighbors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2193
clear_ocr_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2193
create_ocr_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2194
deserialize_ocr_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2197
do_ocr_multi_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2198
do_ocr_single_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2199
do_ocr_word_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2200
get_features_ocr_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2201
get_params_ocr_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2202
read_ocr_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2203
select_feature_set_trainf_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2204
serialize_ocr_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2205
trainf_ocr_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2206
write_ocr_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2207
21.4 Lexica . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2208
clear_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2208
create_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2209
import_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2209
inspect_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2210
lookup_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2210
suggest_lexicon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2211
21.5 Neural Nets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2212
clear_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2212
create_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2212
deserialize_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2216
do_ocr_multi_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2217
do_ocr_single_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2217
do_ocr_word_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2218
get_features_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2220
get_params_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2221
get_prep_info_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2222
get_regularization_params_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2224
get_rejection_params_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2224
read_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2225
select_feature_set_trainf_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2226
select_feature_set_trainf_mlp_protected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2228
serialize_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2229
set_regularization_params_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2230
set_rejection_params_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2232
trainf_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2233
trainf_ocr_class_mlp_protected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2234
write_ocr_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2236
21.6 Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2236
clear_text_model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2236
clear_text_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2237
create_text_model_reader . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2237
find_text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2239
get_text_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2240
get_text_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2241
get_text_result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2242
segment_characters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2244
select_characters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2246
set_text_model_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2249
text_line_orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2254
text_line_slant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2255
21.7 Support Vector Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2257
clear_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2257
create_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2257
deserialize_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2261
do_ocr_multi_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2261
do_ocr_single_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2262
do_ocr_word_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2263
get_features_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2265
get_params_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2265
get_prep_info_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2266
get_support_vector_num_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2268
get_support_vector_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2269
read_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2269
reduce_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2270
select_feature_set_trainf_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2271
select_feature_set_trainf_svm_protected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2273
serialize_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2274
trainf_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2275
trainf_ocr_class_svm_protected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2276
write_ocr_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2277
21.8 Training Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2278
append_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2278
concat_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2279
protect_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2280
read_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2281
read_ocr_trainf_names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2282
read_ocr_trainf_names_protected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2282
read_ocr_trainf_select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2283
write_ocr_trainf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2284
write_ocr_trainf_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2284
22 Object 2287
22.1 Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2288
compare_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2288
count_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2289
get_channel_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2289
get_obj_class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2290
test_equal_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2291
22.2 Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2292
clear_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2292
concat_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2292
copy_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2293
gen_empty_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2295
insert_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2295
integer_to_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2296
obj_diff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2297
obj_to_integer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2297
remove_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2298
replace_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2299
select_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2300
23 Regions 2303
23.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2303
get_region_contour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2303
get_region_convex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2304
get_region_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2304
get_region_polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2305
get_region_runs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2306
23.2 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2307
gen_checker_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2307
gen_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2308
gen_circle_sector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2310
gen_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2312
gen_ellipse_sector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2313
gen_empty_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2315
gen_grid_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2315
gen_random_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2317
gen_random_regions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2318
gen_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2320
gen_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2321
gen_region_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2323
gen_region_histo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2323
gen_region_hline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2324
gen_region_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2325
gen_region_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2326
gen_region_polygon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2327
gen_region_polygon_filled . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2328
gen_region_polygon_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2329
gen_region_runs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2330
label_to_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2331
23.3 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2332
area_center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2339
area_holes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2340
circularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2341
compactness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2342
connect_and_holes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2343
contlength . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2344
convexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2345
diameter_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2346
eccentricity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2347
elliptic_axis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2348
euler_number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2349
find_neighbors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2350
get_region_index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2351
get_region_thickness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2352
hamming_distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2352
hamming_distance_norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2353
height_width_ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2355
inner_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2355
inner_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2357
moments_region_2nd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2357
moments_region_2nd_invar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2359
moments_region_2nd_rel_invar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2360
moments_region_3rd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2361
moments_region_3rd_invar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2361
moments_region_central . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2362
moments_region_central_invar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2363
orientation_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2364
rectangularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2365
region_features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2366
roundness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2369
runlength_distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2370
runlength_features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2371
select_region_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2372
select_region_spatial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2373
select_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2374
select_shape_proto . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2377
select_shape_std . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2379
smallest_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2380
smallest_rectangle1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2382
smallest_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2383
spatial_relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2384
23.4 Geometric Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2386
affine_trans_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2386
mirror_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2387
move_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2388
polar_trans_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2389
polar_trans_region_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2391
projective_trans_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2393
transpose_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2394
zoom_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2396
23.5 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2396
complement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2396
difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2397
intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2398
symm_difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2399
union1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2400
union2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2401
23.6 Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2401
test_equal_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2401
test_region_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2402
test_region_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2403
test_subset_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2404
23.7 Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2405
background_seg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2405
clip_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2406
clip_region_rel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2407
closest_point_transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2408
connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2410
distance_transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2411
eliminate_runs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2412
expand_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2413
fill_up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2415
fill_up_shape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2415
junctions_skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2416
merge_regions_line_scan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2417
partition_dynamic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2418
partition_rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2419
rank_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2420
remove_noise_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2421
shape_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2422
skeleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2423
sort_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2424
split_skeleton_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2426
split_skeleton_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2427
24 Segmentation 2429
24.1 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2429
add_samples_image_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2429
add_samples_image_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2430
add_samples_image_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2431
add_samples_image_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2432
class_2dim_sup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2433
class_2dim_unsup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2435
class_ndim_norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2437
classify_image_class_gmm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2439
classify_image_class_knn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2440
classify_image_class_lut . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2441
classify_image_class_mlp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2442
classify_image_class_svm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2444
learn_ndim_norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2445
24.2 Edges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2446
detect_edge_segments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2446
hysteresis_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2448
nonmax_suppression_amp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2449
nonmax_suppression_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2450
24.3 Maximally Stable Extremal Regions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2451
segment_image_mser . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2451
24.4 Region Growing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2456
expand_gray . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2456
expand_gray_ref . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2458
regiongrowing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2460
regiongrowing_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2461
regiongrowing_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2462
24.5 Threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2467
auto_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2473
binary_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2474
char_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2475
check_difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2477
dual_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2478
dyn_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2480
fast_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2482
histo_to_thresh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2483
local_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2484
threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2486
threshold_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2487
var_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2488
zero_crossing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2493
zero_crossing_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2494
24.6 Topography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2495
critical_points_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2495
local_max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2496
local_max_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2497
local_min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2498
local_min_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2500
lowlands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2501
lowlands_center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2502
plateaus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2503
plateaus_center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2504
pouring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2505
saddle_points_sub_pix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2507
watersheds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2508
watersheds_marker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2509
watersheds_threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2511
25 System 2513
25.1 Compute Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2513
activate_compute_device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2513
deactivate_all_compute_devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2514
deactivate_compute_device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2514
get_compute_device_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2515
get_compute_device_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2515
init_compute_device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2516
open_compute_device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2517
query_available_compute_devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2518
release_all_compute_devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2519
release_compute_device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2519
set_compute_device_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2520
25.2 Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2521
count_relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2521
get_modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2523
reset_obj_db . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2523
25.3 Encrypted Item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2524
read_encrypted_item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2524
write_encrypted_item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2525
25.4 Error Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2526
get_check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2526
get_error_text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2526
get_extended_error_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2527
get_spy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2528
query_spy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2528
set_check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2529
set_spy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2530
25.5 I/O Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2532
close_io_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2532
close_io_device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2533
control_io_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2533
control_io_device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2534
control_io_interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2534
get_io_channel_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2535
get_io_device_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2536
open_io_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2537
open_io_device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2538
query_io_device . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2539
query_io_interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2540
read_io_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2541
set_io_channel_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2542
set_io_device_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2542
write_io_channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2543
25.6 Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2544
get_chapter_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2544
get_keywords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2545
get_operator_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2545
get_operator_name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2547
get_param_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2547
get_param_names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2549
get_param_num . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2550
get_param_types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2551
query_operator_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2552
query_param_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2552
search_operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2553
25.7 Memory Block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2553
compare_memory_block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2553
create_memory_block_extern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2554
create_memory_block_extern_copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2555
get_memory_block_ptr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2556
read_memory_block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2556
write_memory_block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2557
25.8 Multithreading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2558
broadcast_condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2558
clear_barrier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2558
clear_condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2559
clear_event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2559
clear_message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2560
clear_message_queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2561
clear_mutex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2562
create_barrier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2562
create_condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2563
create_event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2564
create_message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2565
create_message_queue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2566
create_mutex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2567
dequeue_message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2568
enqueue_message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2569
get_current_hthread_id . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2570
get_message_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2571
get_message_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2572
get_message_queue_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2573
get_message_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2574
get_threading_attrib . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2575
interrupt_operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2576
lock_mutex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2577
read_message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2578
set_message_obj . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2578
set_message_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2579
set_message_queue_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2581
set_message_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2582
signal_condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2583
signal_event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2584
timed_wait_condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2584
try_lock_mutex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2585
try_wait_event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2586
unlock_mutex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2586
wait_barrier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2587
wait_condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2587
wait_event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2588
write_message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2588
25.9 Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2589
count_seconds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2589
get_system_time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2590
system_call . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2591
wait_seconds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2591
25.10 Parallelization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2592
get_aop_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2592
optimize_aop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2593
query_aop_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2595
read_aop_knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2596
set_aop_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2597
write_aop_knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2599
25.11 Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2600
get_system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2600
get_system_info . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2604
set_operator_timeout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2605
set_system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2606
25.12 Serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2620
clear_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2620
close_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2621
get_serial_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2621
open_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2622
read_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2623
set_serial_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2623
write_serial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2625
25.13 Serialized Item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2625
clear_serialized_item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2625
create_serialized_item_ptr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2626
decrypt_serialized_item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2627
encrypt_serialized_item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2628
fread_serialized_item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2628
fwrite_serialized_item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2629
get_serialized_item_ptr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2630
25.14 Sockets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2630
close_socket . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2630
get_next_socket_data_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2631
get_socket_descriptor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2631
get_socket_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2632
open_socket_accept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2633
open_socket_connect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2635
receive_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2636
receive_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2637
receive_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2638
receive_serialized_item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2638
receive_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2639
receive_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2639
send_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2640
send_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2641
send_region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2642
send_serialized_item . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2642
send_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2643
send_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2644
set_socket_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2645
socket_accept_connect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2645
26 Tools 2647
26.1 Background Estimator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2647
close_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2647
create_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2648
get_bg_esti_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2650
give_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2652
run_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2653
set_bg_esti_params . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2654
update_bg_esti . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2656
26.2 Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2657
abs_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2657
compose_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2658
create_funct_1d_array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2658
create_funct_1d_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2659
derivate_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2660
funct_1d_to_pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2661
get_pair_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2661
get_y_value_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2661
integrate_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2662
invert_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2663
local_min_max_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2663
match_funct_1d_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2664
negate_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2665
num_points_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2666
read_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2666
sample_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2666
scale_y_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2667
smooth_funct_1d_gauss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2668
smooth_funct_1d_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2668
transform_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2669
write_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2670
x_range_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2670
y_range_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2671
zero_crossings_funct_1d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2671
26.3 Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2672
angle_ll . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2672
angle_lx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2673
apply_distance_transform_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2674
area_intersection_rectangle2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2675
clear_distance_transform_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2676
create_distance_transform_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2677
deserialize_distance_transform_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2679
distance_cc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2679
distance_cc_min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2680
distance_cc_min_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2681
distance_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2682
distance_lc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2684
distance_lr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2684
distance_pc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2685
distance_pl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2686
distance_point_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2687
distance_point_pluecker_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2688
distance_pp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2689
distance_pr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2690
distance_ps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2691
distance_rr_min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2692
distance_rr_min_dil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2693
distance_sc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2693
distance_sl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2694
distance_sr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2695
distance_ss . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2696
get_distance_transform_xld_contour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2698
get_distance_transform_xld_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2698
get_points_ellipse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2699
intersection_circle_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2700
intersection_circles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2701
intersection_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2703
intersection_line_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2704
intersection_line_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2705
intersection_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2705
intersection_segment_circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2706
intersection_segment_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2707
intersection_segment_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2708
intersection_segments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2709
pluecker_line_to_point_direction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2710
pluecker_line_to_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2711
point_direction_to_pluecker_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2712
points_to_pluecker_line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2713
projection_pl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2714
read_distance_transform_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2715
serialize_distance_transform_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2716
set_distance_transform_xld_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2717
write_distance_transform_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2718
26.4 Grid Rectification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2718
connect_grid_points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2718
create_rectification_grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2720
find_rectification_grid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2720
gen_arbitrary_distortion_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2721
gen_grid_rectification_map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2723
26.5 Hough . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2724
hough_circle_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2724
hough_circles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2725
hough_line_trans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2726
hough_line_trans_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2727
hough_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2728
hough_lines_dir . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2729
select_matching_lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2731
26.6 Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2732
clear_scattered_data_interpolator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2732
create_scattered_data_interpolator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2733
interpolate_scattered_data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2734
interpolate_scattered_data_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2734
interpolate_scattered_data_points_to_image . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2736
26.7 Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2737
line_orientation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2737
line_position . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2738
26.8 Mosaicking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2739
adjust_mosaic_images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2739
bundle_adjust_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2742
gen_bundle_adjusted_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2745
gen_cube_map_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2746
gen_projective_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2748
gen_spherical_mosaic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2750
proj_match_points_distortion_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2752
proj_match_points_distortion_ransac_guided . . . . . . . . . . . . . . . . . . . . . . . . . . . 2756
proj_match_points_ransac . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2760
proj_match_points_ransac_guided . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2762
27 Transformations 2767
27.1 2D Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2767
affine_trans_pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2771
affine_trans_point_2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2773
deserialize_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2774
hom_mat2d_compose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2774
hom_mat2d_determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2775
hom_mat2d_identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2776
hom_mat2d_invert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2777
hom_mat2d_reflect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2777
hom_mat2d_reflect_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2779
hom_mat2d_rotate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2780
hom_mat2d_rotate_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2781
hom_mat2d_scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2783
hom_mat2d_scale_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2784
hom_mat2d_slant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2785
hom_mat2d_slant_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2787
hom_mat2d_to_affine_par . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2788
hom_mat2d_translate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2789
hom_mat2d_translate_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2791
hom_mat2d_transpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2792
hom_mat3d_project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2792
hom_vector_to_proj_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2794
point_line_to_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2796
projective_trans_pixel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2800
projective_trans_point_2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2801
serialize_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2802
vector_angle_to_rigid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2802
vector_field_to_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2804
vector_to_aniso . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2804
vector_to_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2805
vector_to_proj_hom_mat2d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2807
vector_to_proj_hom_mat2d_distortion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2809
vector_to_rigid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2811
vector_to_similarity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2812
27.2 3D Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2813
affine_trans_point_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2813
deserialize_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2815
hom_mat3d_compose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2815
hom_mat3d_determinant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2816
hom_mat3d_identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2817
hom_mat3d_invert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2817
hom_mat3d_rotate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2818
hom_mat3d_rotate_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2820
hom_mat3d_scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2822
hom_mat3d_scale_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2823
hom_mat3d_to_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2825
hom_mat3d_translate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2826
hom_mat3d_translate_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2827
hom_mat3d_transpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2828
point_pluecker_line_to_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2829
pose_to_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2830
projective_trans_hom_point_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2831
projective_trans_point_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2832
serialize_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2833
vector_to_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2834
27.3 Dual Quaternions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2835
deserialize_dual_quat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2835
dual_quat_compose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2836
dual_quat_conjugate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2837
dual_quat_interpolate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2838
dual_quat_normalize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2839
dual_quat_to_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2839
dual_quat_to_screw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2840
dual_quat_trans_line_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2841
dual_quat_trans_point_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2843
screw_to_dual_quat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2844
serialize_dual_quat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2845
27.4 Misc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2846
convert_point_3d_cart_to_spher . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2846
convert_point_3d_spher_to_cart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2847
27.5 Poses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2849
convert_pose_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2849
create_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2850
deserialize_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2854
dual_quat_to_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2855
get_circle_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2855
get_pose_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2857
get_rectangle_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2857
pose_average . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2861
pose_compose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2862
pose_invert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2862
pose_to_dual_quat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2863
pose_to_quat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2864
proj_hom_mat2d_to_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2864
quat_to_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2865
read_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2866
serialize_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2867
set_origin_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2867
vector_to_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2868
write_pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2871
27.6 Quaternions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2873
axis_angle_to_quat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2873
deserialize_quat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2874
quat_compose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2874
quat_conjugate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2875
quat_interpolate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2875
quat_normalize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2876
quat_rotate_point_3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2877
quat_to_hom_mat3d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2878
serialize_quat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2878
28 Tuple 2881
28.1 Arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2881
tuple_abs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2881
tuple_acos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2881
tuple_acosh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2882
tuple_add . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2883
tuple_asin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2883
tuple_asinh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2884
tuple_atan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2885
tuple_atan2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2885
tuple_atanh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2886
tuple_cbrt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2887
tuple_ceil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2887
tuple_cos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2888
tuple_cosh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2888
tuple_cumul . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2889
tuple_deg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2890
tuple_div . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2890
tuple_erf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2891
tuple_erfc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2891
tuple_exp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2892
tuple_exp10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2893
tuple_exp2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2893
tuple_fabs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2894
tuple_floor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2894
tuple_fmod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2895
tuple_hypot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2896
tuple_ldexp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2896
tuple_lgamma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2897
tuple_log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2898
tuple_log10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2898
tuple_log2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2899
tuple_max2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2900
tuple_min2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2900
tuple_mod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2901
tuple_mult . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2902
tuple_neg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2902
tuple_pow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2903
tuple_rad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2903
tuple_sgn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2904
tuple_sin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2905
tuple_sinh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2905
tuple_sqrt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2906
tuple_sub . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2906
tuple_tan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2907
tuple_tanh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2908
tuple_tgamma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2908
28.2 Bit Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2909
tuple_band . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2909
tuple_bnot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2910
tuple_bor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2910
tuple_bxor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2911
tuple_lsh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2912
tuple_rsh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2912
28.3 Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2913
tuple_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2913
tuple_equal_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2914
tuple_greater . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2914
tuple_greater_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2915
tuple_greater_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2916
tuple_greater_equal_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2916
tuple_less . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2917
tuple_less_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2918
tuple_less_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2919
tuple_less_equal_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2919
tuple_not_equal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2920
tuple_not_equal_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2921
28.4 Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2922
handle_to_integer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2922
integer_to_handle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2922
tuple_chr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2923
tuple_chrt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2924
tuple_int . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2925
tuple_number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2925
tuple_ord . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2926
tuple_ords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2927
tuple_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2928
tuple_round . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2928
tuple_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2929
28.5 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2931
clear_handle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2931
tuple_concat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2932
tuple_constant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2933
tuple_gen_const . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2933
tuple_gen_sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2934
tuple_rand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2935
tuple_repeat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2935
tuple_repeat_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2936
28.6 Data Containers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2937
copy_dict . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2937
create_dict . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2938
dict_to_json . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2939
get_dict_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2940
get_dict_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2941
get_dict_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2942
json_to_dict . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2944
read_dict . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2944
remove_dict_key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2946
set_dict_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2947
set_dict_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2948
set_dict_tuple_at . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2949
write_dict . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2951
28.7 Element Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2952
tuple_inverse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2952
tuple_sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2953
tuple_sort_index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2953
28.8 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2954
get_handle_object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2954
get_handle_param . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2955
get_handle_tuple . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2955
tuple_deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2956
tuple_histo_range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2957
tuple_length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2958
tuple_max . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2959
tuple_mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2959
tuple_median . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2960
tuple_min . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2960
tuple_sum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2961
28.9 Logical Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2962
tuple_and . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2962
tuple_not . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2962
tuple_or . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2963
tuple_xor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2964
28.10 Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2964
tuple_insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2964
tuple_remove . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2965
tuple_replace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2966
28.11 Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2967
tuple_find . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2967
tuple_find_first . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2968
tuple_find_last . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2968
tuple_first_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2969
tuple_last_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2970
tuple_select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2970
tuple_select_mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2971
tuple_select_range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2972
tuple_select_rank . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2973
tuple_str_bit_select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2973
tuple_uniq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2974
28.12 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2975
tuple_difference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2975
tuple_intersection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2976
tuple_symmdiff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2977
tuple_union . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2977
28.13 String Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2978
tuple_environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2979
tuple_join . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2979
tuple_regexp_match . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2980
tuple_regexp_replace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2983
tuple_regexp_select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2984
tuple_regexp_test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2985
tuple_split . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2986
tuple_str_distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2987
tuple_str_first_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2988
tuple_str_last_n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2989
tuple_str_replace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2990
tuple_strchr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2990
tuple_strlen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2991
tuple_strrchr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2992
tuple_strrstr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2993
tuple_strstr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2994
tuple_substr . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2995
28.14 Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2996
tuple_is_handle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2996
tuple_is_handle_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2997
tuple_is_int . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2997
tuple_is_int_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2998
tuple_is_mixed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2999
tuple_is_nan_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3000
tuple_is_number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3000
tuple_is_real . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3001
tuple_is_real_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3002
tuple_is_string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3003
tuple_is_string_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3004
tuple_is_valid_handle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3005
tuple_sem_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3005
tuple_sem_type_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3006
tuple_type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3007
tuple_type_elem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3008
29 XLD 3011
29.1 Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3011
get_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3011
get_lines_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3011
get_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3012
get_polygon_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3013
29.2 Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3014
gen_circle_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3014
gen_contour_nurbs_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3015
gen_contour_polygon_rounded_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3017
gen_contour_polygon_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3018
gen_contour_region_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3018
gen_contours_skeleton_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3020
gen_cross_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3021
gen_ellipse_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3021
gen_nurbs_interp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3023
gen_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3024
gen_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3025
gen_rectangle2_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3026
mod_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3027
29.3 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3028
area_center_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3028
area_center_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3029
circularity_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3030
compactness_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3031
contour_point_num_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3032
convexity_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3032
diameter_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3033
dist_ellipse_contour_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3034
dist_ellipse_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3035
dist_rectangle2_contour_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3037
eccentricity_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3038
eccentricity_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3039
elliptic_axis_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3040
elliptic_axis_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3041
fit_circle_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3042
fit_ellipse_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3044
fit_line_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3047
fit_rectangle2_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3049
get_contour_angle_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3052
get_contour_attrib_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3052
get_contour_global_attrib_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3056
get_regress_params_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3059
height_width_ratio_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3061
info_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3062
length_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3062
local_max_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3063
max_parallels_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3064
moments_any_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3064
moments_any_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3066
moments_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3068
moments_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3068
orientation_points_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3069
orientation_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3070
query_contour_attribs_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3071
query_contour_global_attribs_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3072
rectangularity_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3072
select_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3073
select_shape_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3074
select_xld_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3077
smallest_circle_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3077
smallest_rectangle1_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3078
smallest_rectangle2_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3079
test_closed_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3080
test_self_intersection_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3081
test_xld_point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3081
29.4 Geometric Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3082
affine_trans_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3082
affine_trans_polygon_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3083
gen_parallel_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3084
polar_trans_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3085
polar_trans_contour_xld_inv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3087
projective_trans_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3089
29.5 Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3090
difference_closed_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3090
difference_closed_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3091
intersection_closed_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3092
intersection_closed_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3093
intersection_region_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3094
symm_difference_closed_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3095
symm_difference_closed_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3096
union2_closed_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3097
union2_closed_polygons_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3098
29.6 Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3100
add_noise_white_contour_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3100
clip_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3101
clip_end_points_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3101
close_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3102
combine_roads_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3103
crop_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3104
merge_cont_line_scan_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3105
regress_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3106
segment_contour_attrib_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3107
segment_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3109
shape_trans_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3111
smooth_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3112
sort_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3112
split_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3113
union_adjacent_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3114
union_cocircular_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3116
union_collinear_contours_ext_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3119
union_collinear_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3121
union_cotangential_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3125
union_straight_contours_xld . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3129
Index 3131
Chapter 1
1D Measuring
Measure edges and the distances between them along a line (1) or along an arc (2). These images are from the
example programs fuzzy_measure_pin.hdev and measure_ring.hdev.
In the following, the steps that are required to use 1D measuring are described briefly.
Generate measure object: First, a measure object must be generated that describes the region of interest for the
measurement. If the measurement should be performed along a line, the measure object is defined by a
rectangle. If it should be performed along an arc, the measure object is defined as an annular arc. The
measure objects are generated by the operators
• gen_measure_rectangle2 or
• gen_measure_arc.
Note that you can use shape-based matching (see chapter Matching / Shape-Based) to automatically align
the measure objects.
Perform the measurement: Then, the actual measurement is performed. For this, typically one of the following
operators is used:
• measure_pos extracts straight edges perpendicular to the main axis of the measure object and returns
the positions of the edge centers, the edge amplitudes, and the distances between consecutive edges.
• measure_pairs extracts straight edge pairs perpendicular to the main axis of the measure object
and returns the positions of the edge centers of the edge pairs, the edge amplitudes for the edge pairs,
the distances between the edges of an edge pair, and the distances between consecutive edge pairs.
• measure_thresh extracts points with a particular gray value along the main axis of the measure
object and returns their positions and the distances between consecutive points.
Alternatively, if there are extra edges that do not belong to the measurement, fuzzy measuring can be applied. Here, so-called fuzzy rules, which describe the features of good edges, must be defined. Possible
features are, e.g., the position, the distance, the gray values, or the amplitude of edges. These functions
are created with create_funct_1d_pairs and passed to the tool with set_fuzzy_measure or
set_fuzzy_measure_norm_pair. Then, based on these rules, one of the following operators will
extract the most appropriate edges:
• fuzzy_measure_pos extracts straight edges perpendicular to the main axis of the measure object
and returns the positions of the edge centers, the edge amplitudes, the fuzzy scores, and the distances
between consecutive edges.
• fuzzy_measure_pairs extracts straight edge pairs perpendicular to the main axis of the measure
object and returns the positions of the first and second edges of the edge pairs, the edge amplitudes for
the edge pairs, the positions of the centers of the edge pairs, the fuzzy scores, the distances between
the edges of an edge pair, and the distances between consecutive edge pairs.
• fuzzy_measure_pairing is similar to fuzzy_measure_pairs with the exception that it is
also possible to extract interleaving and included pairs using the parameter Pairing.
Alternatively to the automatic extraction of edges or points within the measure object, you can also extract a
one-dimensional gray value profile perpendicular to the rectangle or annular arc and evaluate this gray value
information according to your needs. The gray value profile within the measure object can be extracted with
the operator
• measure_projection.
Destroy measure object handle: When you no longer need the measure object, you destroy it by passing the
handle to
• close_measure.
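As an illustration, the steps above can be combined into a small HDevelop sketch. It is only a sketch: the image name ('fuse'), the rectangle coordinates, and the extraction parameters are assumptions chosen for illustration, not values prescribed by the operators.

* Read an image and query its size (example image name assumed).
read_image (Image, 'fuse')
get_image_size (Image, Width, Height)
* Step 1: generate a measure object along a rectangle (illustrative geometry).
gen_measure_rectangle2 (300, 200, 0.0, 100, 20, Width, Height, 'nearest_neighbor', MeasureHandle)
* Step 2: extract straight edges perpendicular to the major axis of the rectangle.
measure_pos (Image, MeasureHandle, 1.0, 30.0, 'all', 'all', RowEdge, ColumnEdge, Amplitude, Distance)
* Step 3: destroy the measure object when it is no longer needed.
close_measure (MeasureHandle)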
Further operators
In addition to the operators mentioned above, you can use reset_fuzzy_measure to discard a fuzzy function of a fuzzy set that was set via set_fuzzy_measure or set_fuzzy_measure_norm_pair before, translate_measure to translate the reference point of the measure object to a specified position,
write_measure and read_measure to write the measure object to file and read it from file again, and
serialize_measure and deserialize_measure to serialize and deserialize the measure object.
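A brief sketch of how some of these operators might be used together; the target position and the file name are illustrative assumptions.

* Translate the reference point of an existing measure object (illustrative position).
translate_measure (MeasureHandle, 350, 250)
* Write the measure object to file and read it back (illustrative file name).
write_measure (MeasureHandle, 'measure_object.msr')
read_measure ('measure_object.msr', MeasureHandleRead)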
Glossary
In the following, the most important terms that are used in the context of 1D Measuring are described.
measure object A data structure that contains a specific region of interest that is prepared for the extraction of
straight edges which lie perpendicular to the major axis of a rectangle or an annular arc.
annular arc A circular arc with an associated width.
Further Information
See also the “Solution Guide Basics” and “Solution Guide on 1D Measuring” for further de-
tails about 1D Measuring.
Learn about 1D Measuring and many other topics in interactive online courses at our MVTec Academy.
close_measure ( : : MeasureHandle : )
Possible Predecessors
gen_measure_rectangle2, gen_measure_arc, set_fuzzy_measure
Possible Successors
close_measure
Alternatives
edges_sub_pix, fuzzy_measure_pairs, measure_pairs
See also
fuzzy_measure_pos, measure_pos
Module
1D Metrology
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2 / real
Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Sigma of Gaussian smoothing.
Default: 1.0
Suggested values: Sigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Value range: 0.4 ≤ Sigma ≤ 100 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
. AmpThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Minimum edge amplitude.
Default: 30.0
Suggested values: AmpThresh ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Value range: 1 ≤ AmpThresh ≤ 255 (lin)
Minimum increment: 0.5
Recommended increment: 2
. FuzzyThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Minimum fuzzy value.
Default: 0.5
Suggested values: FuzzyThresh ∈ {0.1, 0.3, 0.5, 0.7, 0.9}
Value range: 0.0 ≤ FuzzyThresh ≤ 1.0 (lin)
Recommended increment: 0.1
. Transition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Select the first gray value transition of the edge pairs.
Default: ’all’
List of values: Transition ∈ {’all’, ’positive’, ’negative’}
. RowEdgeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinate of the first edge point.
. ColumnEdgeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinate of the first edge point.
. AmplitudeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Edge amplitude of the first edge (with sign).
. RowEdgeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinate of the second edge point.
. ColumnEdgeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinate of the second edge point.
. AmplitudeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Edge amplitude of the second edge (with sign).
. RowEdgeCenter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinate of the center of the edge pair.
. ColumnEdgeCenter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinate of the center of the edge pair.
. FuzzyScore (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Fuzzy evaluation of the edge pair.
. IntraDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Distance between edges of an edge pair.
. InterDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Distance between consecutive edge pairs.
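The following sketch illustrates a possible call sequence; the fuzzy function, the image, and all numeric values are assumptions for illustration and are not prescribed by the operator.

* Define a fuzzy function that prefers edge pairs of roughly 20 pixels width (illustrative values)
* and attach it to the measure object as a 'size' rule.
create_funct_1d_pairs ([10.0, 20.0, 30.0], [0.0, 1.0, 0.0], SizeFunction)
set_fuzzy_measure (MeasureHandle, 'size', SizeFunction)
* Extract the edge pairs that best match the fuzzy rules.
fuzzy_measure_pairs (Image, MeasureHandle, 1.0, 30.0, 0.5, 'all', RowEdgeFirst, ColumnEdgeFirst, AmplitudeFirst, RowEdgeSecond, ColumnEdgeSecond, AmplitudeSecond, RowEdgeCenter, ColumnEdgeCenter, FuzzyScore, IntraDistance, InterDistance)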
Result
If the parameter values are correct, the operator fuzzy_measure_pairs returns the value 2 (H_MSG_TRUE).
Otherwise an exception is raised.
Execution Information
Possible Predecessors
gen_measure_rectangle2, gen_measure_arc, set_fuzzy_measure
Possible Successors
close_measure
Alternatives
edges_sub_pix, fuzzy_measure_pairing, measure_pairs
See also
fuzzy_measure_pos, measure_pos
Module
1D Metrology
to the edges in the image. Additionally, Sigma must not become larger than approx. 0.5 * Length1 (for Length1
see gen_measure_rectangle2).
It should be kept in mind that fuzzy_measure_pos ignores the domain of Image for efficiency reasons. If
certain regions in the image should be excluded from the measurement a new measure object with appropriately
modified parameters should be generated.
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2 / real
Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Sigma of Gaussian smoothing.
Default: 1.0
Suggested values: Sigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Value range: 0.4 ≤ Sigma ≤ 100 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
. AmpThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Minimum edge amplitude.
Default: 30.0
Suggested values: AmpThresh ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Value range: 1 ≤ AmpThresh ≤ 255 (lin)
Minimum increment: 0.5
Recommended increment: 2
. FuzzyThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Minimum fuzzy value.
Default: 0.5
Suggested values: FuzzyThresh ∈ {0.1, 0.3, 0.5, 0.6, 0.7, 0.9}
Value range: 0.0 ≤ FuzzyThresh ≤ 1.0 (lin)
Recommended increment: 0.1
. Transition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Select light/dark or dark/light edges.
Default: ’all’
List of values: Transition ∈ {’all’, ’positive’, ’negative’}
. RowEdge (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinate of the edge point.
. ColumnEdge (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinate of the edge point.
. Amplitude (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Edge amplitude of the edge (with sign).
. FuzzyScore (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Fuzzy evaluation of the edges.
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Distance between consecutive edges.
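The following sketch illustrates a possible call sequence; the fuzzy function on the edge amplitude and all numeric values are assumptions for illustration.

* Define a fuzzy function that favors edges with an amplitude of at least about 40 (illustrative values)
* and attach it to the measure object as a 'contrast' rule.
create_funct_1d_pairs ([20.0, 40.0], [0.0, 1.0], ContrastFunction)
set_fuzzy_measure (MeasureHandle, 'contrast', ContrastFunction)
* Extract the edges with the best fuzzy scores.
fuzzy_measure_pos (Image, MeasureHandle, 1.0, 30.0, 0.5, 'all', RowEdge, ColumnEdge, Amplitude, FuzzyScore, Distance)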
Result
If the parameter values are correct, the operator fuzzy_measure_pos returns the value 2 (H_MSG_TRUE).
Otherwise an exception is raised.
Execution Information
Possible Successors
close_measure
Alternatives
edges_sub_pix, measure_pos
See also
fuzzy_measure_pairing, fuzzy_measure_pairs, measure_pairs
Module
1D Metrology
Please also note that the center coordinates of the rectangle are rounded internally, so that the center lies on the
pixel grid. This is done to ensure consistency.
Parameters
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.y ; real / integer
Row coordinate of the center of the rectangle.
Default: 300.0
Suggested values: Row ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Value range: 0.0 ≤ Row ≤ 511.0 (lin)
Minimum increment: 1.0
Recommended increment: 10.0
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.x ; real / integer
Column coordinate of the center of the rectangle.
Default: 200.0
Suggested values: Column ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Value range: 0.0 ≤ Column ≤ 511.0 (lin)
Minimum increment: 1.0
Recommended increment: 10.0
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.angle.rad ; real / integer
Angle of longitudinal axis of the rectangle to horizontal (radians).
Default: 0.0
Suggested values: Phi ∈ {-1.178097, -0.785398, -0.392699, 0.0, 0.392699, 0.785398, 1.178097}
Value range: -1.178097 ≤ Phi ≤ 1.178097 (lin)
Minimum increment: 0.001
Recommended increment: 0.1
. Length1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hwidth ; real / integer
Half width of the rectangle.
Default: 100.0
Suggested values: Length1 ∈ {3.0, 5.0, 10.0, 15.0, 20.0, 50.0, 100.0, 200.0, 300.0, 500.0}
Value range: 1.0 ≤ Length1 (lin)
Minimum increment: 1.0
Recommended increment: 10.0
. Length2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hheight ; real / integer
Half height of the rectangle.
Default: 20.0
Suggested values: Length2 ∈ {1.0, 2.0, 3.0, 5.0, 10.0, 15.0, 20.0, 50.0, 100.0, 200.0}
Value range: 0.0 ≤ Length2 (lin)
Minimum increment: 1.0
Recommended increment: 10.0
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the image to be processed subsequently.
Default: 512
Suggested values: Width ∈ {128, 160, 192, 256, 320, 384, 512, 640, 768}
Value range: 0 ≤ Width (lin)
Minimum increment: 1
Recommended increment: 16
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the image to be processed subsequently.
Default: 512
Suggested values: Height ∈ {120, 128, 144, 240, 256, 288, 480, 512, 576}
Value range: 0 ≤ Height (lin)
Minimum increment: 1
Recommended increment: 16
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of interpolation to be used.
Default: ’nearest_neighbor’
List of values: Interpolation ∈ {’nearest_neighbor’, ’bilinear’, ’bicubic’}
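A minimal sketch of a call with the default values listed above; the image size of 512 x 512 is an assumption and should match the images processed subsequently.

* Create a measure object for an axis-parallel rectangle centered at (300, 200)
* with half width 100 and half height 20, for 512 x 512 images.
gen_measure_rectangle2 (300, 200, 0.0, 100, 20, 512, 512, 'nearest_neighbor', MeasureHandle)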
get_measure_param ( : : MeasureHandle, GenParamName : GenParamValue )
• ’type’: Type of the measure object, either ’rectangle2’ if the object was created with
gen_measure_rectangle2, or ’arc’ if it was created with gen_measure_arc.
• ’image_width’, ’image_height’: Image width and height, respectively, for which the measure object was
created.
• ’interpolation’: Used interpolation mode: ’nearest_neighbor’, ’bilinear’ or ’bicubic’.
Properties for rectangular measure objects
Properties for measure objects that were created with gen_measure_rectangle2.
• ’row’, ’column’: Row and column, respectively, of the center of the measurement rectangle.
• ’phi’: Rotation angle of the measurement rectangle.
• ’length1’, ’length2’: Side lengths of the measurement rectangle.
Properties for annular-shaped measure objects
Properties for measure objects that were created with gen_measure_arc.
• ’row’, ’column’: Row and column, respectively, of the center of the annular arc.
• ’radius’: Radius of the annular arc.
Parameters
Possible Predecessors
gen_measure_rectangle2, gen_measure_arc
See also
gen_measure_rectangle2, gen_measure_arc, translate_measure
Module
1D Metrology
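A brief sketch of querying some of the properties listed above; it assumes that MeasureHandle was previously created with gen_measure_rectangle2.

* Query the type and the geometry of an existing measure object.
get_measure_param (MeasureHandle, 'type', Type)
get_measure_param (MeasureHandle, 'row', Row)
get_measure_param (MeasureHandle, 'column', Column)
get_measure_param (MeasureHandle, 'phi', Phi)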
measure_pairs serves to extract straight edge pairs which lie perpendicular to the major axis of a rectangle or
annular arc.
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
The extraction algorithm of measure_pairs is identical to measure_pos. In addition the edges are grouped
to pairs: If Transition = ’positive’, the edge points with a dark-to-light transition in the direction of the major axis of the rectangle are returned in RowEdgeFirst and ColumnEdgeFirst. In this case, the corresponding edges with a light-to-dark transition are returned in RowEdgeSecond and ColumnEdgeSecond. If
Transition = ’negative’, the behavior is exactly opposite. If Transition = ’all’, the first detected edge
defines the transition for RowEdgeFirst and ColumnEdgeFirst. That is, depending on the position of the measure object, edge pairs with a light-dark-light transition or edge pairs with a dark-light-dark transition are returned. This is suitable, e.g., for measuring objects whose brightness differs from that of the background.
If more than one consecutive edge with the same transition is found, the first one is used as a pair element. This
behavior may cause problems in applications in which the threshold Threshold cannot be selected high enough
to suppress consecutive edges of the same transition. For these applications, a second pairing mode exists that only
selects the respective strongest edges of a sequence of consecutive rising and falling edges. This mode is selected
by appending ’_strongest’ to any of the above modes for Transition, e.g., ’negative_strongest’. Finally, it is
possible to select which edge pairs are returned. If Select is set to ’all’, all edge pairs are returned. If it is set to
’first’, only the first of the extracted edge pairs is returned, while if it is set to ’last’, only the last one is returned.
The extracted edges are returned as single points which lie on the major axis of the rectangle. The corresponding
edge amplitudes are returned in AmplitudeFirst and AmplitudeSecond. In addition, the distance between
each edge pair is returned in IntraDistance and the distance between consecutive edge pairs is returned
in InterDistance. Here, IntraDistance[i] corresponds to the distance between EdgeFirst[i] and EdgeSecond[i], while InterDistance[i] corresponds to the distance between EdgeSecond[i] and EdgeFirst[i+1], i.e., the
tuple InterDistance contains one element less than the tuples of the edge pairs.
Attention
measure_pairs only returns meaningful results if the assumptions that the edges are straight and perpendicular
to the major axis of the rectangle are fulfilled. Thus, it should not be used to extract edges from curved objects,
for example. Furthermore, the user should ensure that the rectangle is as close to perpendicular as possible to the
edges in the image. Additionally, Sigma must not become larger than approx. 0.5 * Length1 (for Length1 see
gen_measure_rectangle2).
It should be kept in mind that measure_pairs ignores the domain of Image for efficiency reasons. If certain
regions in the image should be excluded from the measurement a new measure object with appropriately modified
parameters should be generated.
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2 / real
Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Sigma of Gaussian smoothing.
Default: 1.0
Suggested values: Sigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Value range: 0.4 ≤ Sigma ≤ 100 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Minimum edge amplitude.
Default: 30.0
Suggested values: Threshold ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Value range: 1 ≤ Threshold ≤ 255 (lin)
Minimum increment: 0.5
Recommended increment: 2
. Transition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of gray value transition that determines how edges are grouped to edge pairs.
Default: ’all’
List of values: Transition ∈ {’all’, ’positive’, ’negative’, ’all_strongest’, ’positive_strongest’,
’negative_strongest’}
. Select (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Selection of edge pairs.
Default: ’all’
List of values: Select ∈ {’all’, ’first’, ’last’}
. RowEdgeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinate of the center of the first edge.
. ColumnEdgeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinate of the center of the first edge.
. AmplitudeFirst (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Edge amplitude of the first edge (with sign).
. RowEdgeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinate of the center of the second edge.
. ColumnEdgeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinate of the center of the second edge.
. AmplitudeSecond (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Edge amplitude of the second edge (with sign).
. IntraDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Distance between edges of an edge pair.
. InterDistance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Distance between consecutive edge pairs.
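Example
A minimal usage sketch; the image name ’fuse’ and the rectangle parameters are placeholders that have to be adapted to the application.
* Pair opposite edges along a measurement rectangle crossing the object.
read_image (Image, 'fuse')
get_image_size (Image, Width, Height)
gen_measure_rectangle2 (297, 545, rad(90), 150, 10, Width, Height, \
                        'nearest_neighbor', MeasureHandle)
measure_pairs (Image, MeasureHandle, 1.0, 30, 'all', 'all', \
               RowEdgeFirst, ColumnEdgeFirst, AmplitudeFirst, \
               RowEdgeSecond, ColumnEdgeSecond, AmplitudeSecond, \
               IntraDistance, InterDistance)
close_measure (MeasureHandle)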
Result
If the parameter values are correct the operator measure_pairs returns the value 2 (H_MSG_TRUE). Otherwise
an exception is raised.
Execution Information
these calculations only once, thus increasing the speed of measure_pos significantly. Since there is a trade-off
between accuracy and speed in the subpixel calculations of the gray values, and thus in the accuracy of the ex-
tracted edge positions, different interpolation schemes can be selected in gen_measure_rectangle2. (The
interpolation only influences rectangles not aligned with the image axes.) The measure object generated with
gen_measure_rectangle2 is passed in MeasureHandle.
After the one-dimensional edge profile has been calculated, subpixel edge locations are computed by convolving
the profile with the derivatives of a Gaussian smoothing kernel of standard deviation Sigma. Salient edges can be
selected with the parameter Threshold, which constitutes a threshold on the amplitude values (Amplitude),
i.e., the absolute value of the first derivative of the edge. Note that the amplitude values are scaled by the factor
Sigma · √(2π). Additionally, it is possible to select only positive edges, i.e., edges which constitute a dark-to-light
transition in the direction of the major axis of the rectangle or the arc (Transition = ’positive’), only negative
edges, i.e., light-to-dark transitions (Transition = ’negative’), or both types of edges (Transition = ’all’).
Finally, it is possible to select which edge points are returned. If Select is set to ’all’, all edge points are returned.
If it is set to ’first’, only the first of the extracted edge points is returned, while if it is set to ’last’, only the last one is
returned.
The extracted edges are returned as single points which lie on the major axis of the rectangle or arc in (RowEdge,
ColumnEdge). The corresponding edge amplitudes are returned in Amplitude. In addition, the distance
between consecutive edge points is returned in Distance. Here, Distance[i] corresponds to the distance be-
tween Edge[i] and Edge[i+1], i.e., the tuple Distance contains one element less than the tuples RowEdge and
ColumnEdge.
Attention
measure_pos only returns meaningful results if the assumptions that the edges are straight and perpendicular
to the major axis of the rectangle or arc are fulfilled. Thus, it should not be used to extract edges from curved
objects, for example. Furthermore, the user should ensure that the rectangle or arc is as close to perpendicular as
possible to the edges in the image. Additionally, Sigma must not become larger than approx. 0.5 * Length1
(for Length1 see gen_measure_rectangle2).
It should be kept in mind that measure_pos ignores the domain of Image for efficiency reasons. If certain
regions in the image should be excluded from the measurement a new measure object with appropriately modified
parameters should be generated.
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2 / real
Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Sigma of gaussian smoothing.
Default: 1.0
Suggested values: Sigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Value range: 0.4 ≤ Sigma ≤ 100 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Minimum edge amplitude.
Default: 30.0
Suggested values: Threshold ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Value range: 1 ≤ Threshold ≤ 255 (lin)
Minimum increment: 0.5
Recommended increment: 2
. Transition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Light/dark or dark/light edge.
Default: ’all’
List of values: Transition ∈ {’all’, ’positive’, ’negative’}
. Select (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Selection of end points.
Default: ’all’
List of values: Select ∈ {’all’, ’first’, ’last’}
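Example
A minimal usage sketch for measure_pos; the image name ’fuse’ and the rectangle parameters are placeholders.
* Extract individual edge positions along a measurement rectangle.
read_image (Image, 'fuse')
get_image_size (Image, Width, Height)
gen_measure_rectangle2 (297, 545, rad(90), 150, 10, Width, Height, \
                        'nearest_neighbor', MeasureHandle)
measure_pos (Image, MeasureHandle, 1.0, 30, 'all', 'all', \
             RowEdge, ColumnEdge, Amplitude, Distance)
close_measure (MeasureHandle)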
Result
If the parameter values are correct the operator measure_projection returns the value 2 (H_MSG_TRUE).
Otherwise an exception is raised.
Execution Information
Extracting points with a particular gray value along a rectangle or an annular arc.
measure_thresh extracts points for which the gray value within a one-dimensional gray value profile is equal
to the specified threshold Threshold. The gray value profile is projected onto the major axis of the measure
rectangle which is passed with the parameter MeasureHandle, so the threshold points calculated within the
gray value profile correspond to certain image coordinates on the rectangle’s major axis. These coordinates are
returned as the operator results in RowThresh and ColumnThresh.
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
If the gray value profile intersects the threshold line several times, the parameter Select determines which
values to return. Possible settings are ’first’, ’last’, ’first_last’ (first and last) or ’all’. For the last two cases
Distance returns the distances between the calculated points.
The gray value profile is created by averaging the gray values along all line segments, which are defined by the
measure rectangle as follows:
For every line segment, the average of the gray values of all points with an integer distance to the major axis is
calculated. Due to translation and rotation of the measure rectangle with respect to the image coordinates the input
image Image is in general sampled at subpixel positions.
Since this involves some calculations which can be used repeatedly in several projections, the operator
gen_measure_rectangle2 is used to perform these calculations only once in advance. Here, the measure
object MeasureHandle is generated and different interpolation schemes can be selected.
Attention
measure_thresh only returns meaningful results if the assumptions that the edges are straight and perpendicu-
lar to the major axis of the rectangle are fulfilled. Thus, it should not be used to extract edges from curved objects,
for example. Furthermore, the user should ensure that the rectangle is as close to perpendicular as possible to the
edges in the image. Additionally, Sigma must not become larger than approx. 0.5 * Length1 (for Length1 see
gen_measure_rectangle2).
It should be kept in mind that measure_thresh ignores the domain of Image for efficiency reasons. If certain
regions in the image should be excluded from the measurement a new measure object with appropriately modified
parameters should be generated.
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2 / real
Input image.
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. Sigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Sigma of gaussian smoothing.
Default: 1.0
Suggested values: Sigma ∈ {0.0, 0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Value range: 0.0 ≤ Sigma ≤ 100 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Threshold.
Default: 128.0
Value range: 0 ≤ Threshold ≤ 255 (lin)
Minimum increment: 0.5
Recommended increment: 1
. Select (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Selection of points.
Default: ’all’
List of values: Select ∈ {’all’, ’first’, ’last’, ’first_last’}
. RowThresh (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real
Row coordinates of points with threshold value.
. ColumnThresh (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real
Column coordinates of points with threshold value.
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Distance between consecutive points.
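Example
A minimal usage sketch for measure_thresh; the image name and the rectangle parameters are placeholders.
* Find the positions where the averaged gray value profile crosses 128.
read_image (Image, 'fuse')
get_image_size (Image, Width, Height)
gen_measure_rectangle2 (297, 545, rad(90), 150, 10, Width, Height, \
                        'nearest_neighbor', MeasureHandle)
measure_thresh (Image, MeasureHandle, 1.0, 128, 'all', \
                RowThresh, ColumnThresh, Distance)
close_measure (MeasureHandle)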
Result
If the parameter values are correct the operator measure_thresh returns the value 2 (H_MSG_TRUE). Other-
wise, an exception is raised.
Execution Information
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
Parameters
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name.
File extension: .msr
. MeasureHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .measure ; handle
Measure object handle.
Result
If the parameters are valid, the operator read_measure returns the value 2 (H_MSG_TRUE). If necessary, an
exception is raised.
Execution Information
Possible Predecessors
set_fuzzy_measure
Possible Successors
fuzzy_measure_pos, fuzzy_measure_pairs
See also
set_fuzzy_measure, set_fuzzy_measure_norm_pair
Module
1D Metrology
set_fuzzy_measure specifies a fuzzy function passed in Function. The specified fuzzy functions enable
fuzzy_measure_pos and fuzzy_measure_pairs / fuzzy_measure_pairing to evaluate and select
the detected edge candidates. For this purpose, weighting characteristics for different edge features can be defined
by one function each. Such a specified feature is called a fuzzy set. If no function is specified for a fuzzy set, this
feature is not used for the final edge evaluation. Setting a second fuzzy function for a set discards the first defined
function and replaces it with the second one. A previously defined fuzzy function can be discarded completely
by reset_fuzzy_measure.
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
Functions for five different fuzzy set types selected by the SetType parameter can be defined, the sub types of a
set being mutually exclusive:
• ’contrast’ will use the fuzzy function to evaluate the amplitudes of the edge candidates. When extracting
edge pairs, the fuzzy evaluation is obtained by the geometric average of the fuzzy contrast scores of both
edges.
• The fuzzy function of ’position’ evaluates the distance of each edge candidate to the reference point of
the measure object, generated by gen_measure_arc or gen_measure_rectangle2. The reference
point is located at the beginning whereas ’position_center’ or ’position_end’ sets the reference point to the
middle or the end of the one-dimensional gray value profile instead. If the fuzzy position evaluation depends
on the position of the object along the profile, ’position_first_edge’ / ’position_last_edge’ sets the reference
point at the position of the first/last extracted edge. When extracting edge pairs the position of a pair is
referenced by the geometric average of the fuzzy position scores of both edges.
• Similar to ’position’, ’position_pair’ evaluates the distance of each edge pair to the reference point
of the measure object. The position of a pair is defined by the center point between both
edges. The object’s reference can be set by ’position_pair_center’, ’position_pair_end’ and ’posi-
tion_first_pair’, ’position_last_pair’, respectively. Contrary to ’position’, this set is only used by
fuzzy_measure_pairs/fuzzy_measure_pairing.
• ’size’ denotes a fuzzy set that evaluates the normed distance of the two edges of a pair in pixels.
This set is only used by fuzzy_measure_pairs/fuzzy_measure_pairing. Specifying an up-
per bound for the size by terminating the function with a corresponding fuzzy value of 0.0 will speed up
fuzzy_measure_pairs / fuzzy_measure_pairing because not all possible pairs need to be con-
sidered.
• ’gray’ sets a fuzzy function to weight the mean projected gray value between two edges of a pair. This set is
only used by fuzzy_measure_pairs / fuzzy_measure_pairing.
A fuzzy function is defined as a piecewise linear function by at least two pairs of values, sorted in an ascending
order by their x value. The x values represent the edge feature and must lie within the parameter space of the set
type, i.e., in case of the ’contrast’ and ’gray’ features and, e.g., byte images, within the range 0.0 ≤ x ≤ 255.0. In
case of ’size’, x has to satisfy 0.0 ≤ x, whereas in case of ’position’ x can be any real number. The y values of the
fuzzy function represent the weight of the corresponding feature value and have to satisfy the range of 0.0 ≤ y ≤
1.0. Outside of the function’s interval, defined by the smallest and the greatest x value, the y values of the interval
borders are continued constantly. Such fuzzy functions can be generated by create_funct_1d_pairs.
If more than one set is defined, fuzzy_measure_pos / fuzzy_measure_pairs /
fuzzy_measure_pairing yield the overall fuzzy weighting by the geometric mean of the weights of
each set.
Parameters
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. SetType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Selection of the fuzzy set.
Default: ’contrast’
List of values: SetType ∈ {’position’, ’position_center’, ’position_end’, ’position_first_edge’,
’position_last_edge’, ’position_pair_center’, ’position_pair_end’, ’position_first_pair’, ’position_last_pair’,
’size’, ’gray’, ’contrast’}
. Function (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . function_1d ; real / integer
Fuzzy function.
Example
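A brief sketch of defining a fuzzy set; the support points of the fuzzy function are illustrative values only, and an existing image Image and measure object MeasureHandle are assumed.
* Weight edge candidates by their contrast: weight 0 below a contrast of
* 20, weight 1 above 40, linear in between.
create_funct_1d_pairs ([20.0,40.0], [0.0,1.0], FuzzyContrast)
set_fuzzy_measure (MeasureHandle, 'contrast', FuzzyContrast)
fuzzy_measure_pos (Image, MeasureHandle, 1.0, 20, 0.5, 'all', \
                   RowEdge, ColumnEdge, Amplitude, FuzzyScore, Distance)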
Execution Information
usage of the defined functions. A previously defined normalized fuzzy function can be discarded completely by
reset_fuzzy_measure.
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
Functions for three different fuzzy set types selected by the SetType parameter can be defined, the sub types of
a set being mutually exclusive:
• ’size’ denotes a fuzzy set that evaluates the normalized distance of the two edges of a pair in pixels, where d
is the distance between the edges and s corresponds to the favored pair width PairSize:
x = d / s (x ≥ 0).
Specifying an upper bound xmax for the size by terminating the function with a corresponding fuzzy value
of 0.0 will speed up fuzzy_measure_pairs / fuzzy_measure_pairing because not all possible
pairs must be considered. Additionally, this fuzzy set can also be specified as a normalized size difference by
’size_diff’
x = (s − d) / s (x ≤ 1)
and as an absolute normalized size difference by ’size_abs_diff’
x = |s − d| / s (0 ≤ x ≤ 1).
• The fuzzy function of ’position’ evaluates the signed distance p of each edge candidate to the reference point
of the measure object, generated by gen_measure_arc or gen_measure_rectangle2:
x = p / s.
The reference point is located at the beginning whereas ’position_center’ or ’position_end’ sets the reference
point to the middle or the end of the one-dimensional gray value profile instead. If the fuzzy position
evaluation depends on the position of the object along the profile, ’position_first_edge’ / ’position_last_edge’
sets the reference point at the position of the first/last extracted edge. When extracting edge pairs, the position
of a pair is referenced by the geometric average of the fuzzy position scores of both edges.
• Similar to ’position’, ’position_pair’ evaluates the signed distance of each edge pair to the refer-
ence point of the measure object. The position of a pair is defined by the center point between
both edges. The object’s reference can be set by ’position_pair_center’, ’position_pair_end’ and ’po-
sition_first_pair’, ’position_last_pair’, respectively. Contrary to ’position’, this set is only used by
fuzzy_measure_pairs/fuzzy_measure_pairing.
A normalized fuzzy function is defined as a piecewise linear function by at least two pairs of values, sorted in
an ascending order by their x value. The y values of the fuzzy function represent the weight of the corresponding
feature value and must satisfy the range of 0.0 ≤ y ≤ 1.0. Outside of the function’s interval, defined by the smallest
and the greatest x value, the y values of the interval borders are continued constantly. Such fuzzy functions can be
generated by create_funct_1d_pairs.
If more than one set is defined, fuzzy_measure_pos / fuzzy_measure_pairs /
fuzzy_measure_pairing yield the overall fuzzy weighting by the geometric mean of the weights of
each set.
Parameters
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. PairSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Favored width of edge pairs.
Default: 10.0
Suggested values: PairSize ∈ {4.0, 6.0, 8.0, 10.0, 15.0, 20.0, 30.0}
Value range: 0.0 ≤ PairSize
Minimum increment: 0.1
Recommended increment: 1.0
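Example
A brief sketch; the favored pair width of 10 pixels and the support points of the fuzzy function are illustrative values only, and an existing measure object MeasureHandle is assumed.
* Prefer pairs close to the favored width: a pair at exactly PairSize
* (normalized size 1.0) gets weight 1, the weight drops to 0 at half and
* at 1.5 times the favored width.
create_funct_1d_pairs ([0.5,1.0,1.5], [0.0,1.0,0.0], FuzzySize)
set_fuzzy_measure_norm_pair (MeasureHandle, 10.0, 'size', FuzzySize)
* Subsequent calls to fuzzy_measure_pairs / fuzzy_measure_pairing use
* this weighting.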
Execution Information
the measure object is shifted to the new reference point in an efficient manner. Otherwise, the measure object
is generated anew with gen_measure_rectangle2 or gen_measure_arc using the parameters that were
specified when the measure object was created and the new reference point.
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
Parameters
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y ; real / integer
Row coordinate of the new reference point.
Default: 50.0
Suggested values: Row ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Value range: 0.0 ≤ Row ≤ 511.0 (lin)
Minimum increment: 1.0
Recommended increment: 10.0
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x ; real / integer
Column coordinate of the new reference point.
Default: 100.0
Suggested values: Column ∈ {10.0, 20.0, 50.0, 100.0, 200.0, 300.0, 400.0, 500.0}
Value range: 0.0 ≤ Column ≤ 511.0 (lin)
Minimum increment: 1.0
Recommended increment: 10.0
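Example
A brief sketch; the image name and the coordinates are placeholders. The same measure object is reused at a second reference point instead of generating a new one.
read_image (Image, 'fuse')
get_image_size (Image, Width, Height)
gen_measure_rectangle2 (100, 100, 0, 50, 10, Width, Height, \
                        'nearest_neighbor', MeasureHandle)
measure_pos (Image, MeasureHandle, 1.0, 30, 'all', 'all', \
             Row1, Col1, Amp1, Dist1)
translate_measure (MeasureHandle, 200, 300)
measure_pos (Image, MeasureHandle, 1.0, 30, 'all', 'all', \
             Row2, Col2, Amp2, Dist2)
close_measure (MeasureHandle)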
Result
If the parameter values are correct the operator translate_measure returns the value 2 (H_MSG_TRUE).
Otherwise an exception is raised.
Execution Information
For an explanation of the concept of 1D measuring see the introduction of chapter 1D Measuring.
Parameters
. MeasureHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . measure ; handle
Measure object handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name.
File extension: .msr
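Example
A brief sketch of a write/read round trip; the file name is a placeholder.
* Save a measure object and restore it later instead of regenerating it
* with gen_measure_rectangle2 or gen_measure_arc.
write_measure (MeasureHandle, 'measure_object.msr')
read_measure ('measure_object.msr', ReusedMeasureHandle)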
Result
If the parameters are valid, the operator write_measure returns the value 2 (H_MSG_TRUE). If necessary, an
exception is raised.
Execution Information
2D Metrology
The geometric shapes in (1) are measured using 2D Metrology (2): A metrology model with 4 metrology objects
(blue contours) is created. Using the edge positions (cyan crosses) located within the measure regions (gray
rectangles) for each metrology object, the geometric shapes (green contours) are fitted and their parameters can
be queried. As shown for the circles, more than one instance per object can be found. This image is from the
example program apply_metrology_model.hdev.
In the following, the steps that are required to use 2D metrology are described briefly.
Create the metrology model and specify the image size: First, a metrology model must be created using
• create_metrology_model.
The metrology model is used as a container for one or more metrology objects. For an efficient measurement,
after creating the metrology model, the image size of the image in which the measurements will be performed
should be specified using
• set_metrology_model_image_size.
Provide approximate values: Then, metrology objects are added to the metrology model. Each metrology object
consists of the approximate shape parameters for the corresponding object in the image and of the parameters
that control the measurement. The parameters that control the measurement comprise, e.g., parameters that
specify the dimension and distribution of the measure regions. Furthermore, several generic parameters can
be adjusted for each metrology object. The metrology objects are specified with
• add_metrology_object_generic
or with one of the shape-specific operators
• add_metrology_object_circle_measure, add_metrology_object_ellipse_measure,
add_metrology_object_line_measure, or add_metrology_object_rectangle2_measure.
To visually inspect the defined metrology objects, you can access their XLD contours with the operator
get_metrology_object_model_contour. To visually inspect the created measure regions, you
can access their XLD contours with the operator get_metrology_object_measures.
Modify the model parameters: If a camera calibration has been performed, the camera parameters and the pose
of the measurement plane can be set with
• set_metrology_model_param.
The parameters that control the measurement of the individual metrology objects can be modified with
• set_metrology_object_param.
Align the metrology model: To translate and rotate the metrology model before the next measurement is per-
formed, you can use the operator
• align_metrology_model.
An alignment is temporary and is replaced by the next alignment. The metrology model itself is not changed.
Note that typically the alignment parameters are obtained using shape-based matching.
Apply the measurement: The actual measurement in the image is performed with
• apply_metrology_model.
The operator locates the edges within the measure regions and fits the specified geometric shape to the edge
positions using a RANSAC algorithm. The edges are located internally using the operator measure_pos
or fuzzy_measure_pos (see also chapter 1D Measuring). The latter uses fuzzy methods and is used only
if at least one fuzzy function was set via set_metrology_object_fuzzy_param before applying the
measurement. If more than one instance of the returned object shape is needed (compare image above),
the generic parameter ’num_instances’ must be set to the number of instances that should be returned.
The parameter can be set when adding the individual metrology objects or afterwards with the operator
set_metrology_object_param.
Access the results: After the measurement, the results can be accessed. The parameters of the adapted geometric
shapes of the objects are queried with the operator
• get_metrology_object_result.
Querying only the edges used for the returned result and their amplitudes is also done using
get_metrology_object_result.
The row and column coordinates of all located edges can be accessed with
• get_metrology_object_measures.
To visualize the adapted geometric shapes, you can access their XLD contours with
• get_metrology_object_result_contour.
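A compact sketch of this sequence for a single circle; the image name and the approximate shape parameters are placeholders that have to be adapted to the application.
* Measure one approximately known circle with 2D metrology.
read_image (Image, 'metal-parts/circle_plate_02')
get_image_size (Image, Width, Height)
create_metrology_model (MetrologyHandle)
set_metrology_model_image_size (MetrologyHandle, Width, Height)
add_metrology_object_circle_measure (MetrologyHandle, 354, 274, 53, \
                                     20, 5, 1, 30, [], [], Index)
apply_metrology_model (Image, MetrologyHandle)
get_metrology_object_result (MetrologyHandle, Index, 'all', \
                             'result_type', 'all_param', CircleParam)
get_metrology_object_result_contour (Contour, MetrologyHandle, Index, \
                                     'all', 1.5)
clear_metrology_model (MetrologyHandle)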
Further operators
In addition to the operators mentioned above, you can copy the metrology handle with
copy_metrology_model, write the metrology model to file with write_metrology_model, read
a model from file again using read_metrology_model, and serialize or deserialize a metrology model using
serialize_metrology_model or deserialize_metrology_model.
Furthermore, you can query various information from the metrology model. For example, you can query the indices
of the metrology objects with get_metrology_object_indices, query parameters that are valid for the
entire metrology model with get_metrology_model_param, query a fuzzy parameter of a metrology model
with get_metrology_object_fuzzy_param, query the number of instances of the metrology objects of a
metrology model with get_metrology_object_num_instances, and query the current configuration of
the metrology model with get_metrology_object_param.
Additionally, you can reset all parameters of a metrology model using reset_metrology_object_param
or reset only all fuzzy parameters and fuzzy functions of a metrology model using
reset_metrology_object_fuzzy_param.
Glossary
In the following, the most important terms that are used in the context of 2D Metrology are described.
metrology model Data structure that contains all metrology objects, all information needed for the measurement,
and the measurement results.
metrology object Data structure for the object to be measured with 2D metrology. The metrology object is repre-
sented by a specific geometric shape for which the shape parameters are approximately known. Additionally,
it contains parameters that control the measurement, e.g., parameters that specify the dimension and distri-
bution of the measure regions.
measure regions Rectangular regions that are arranged perpendicular to the boundaries of the approximate ob-
jects. Within these regions the edges that are used to get the exact shape parameters of the metrology objects
are extracted.
returned instance of a metrology object For each metrology object, different instances of the object can be re-
turned by the measurement, e.g., if parallel structures of the same shape exist near to the boundaries of the
approximated geometric shape (see image above). The sequence of the returned instances is arbitrary, i.e., it
is not a measure of the quality of the fitting.
Further Information
See also the “Solution Guide on 2D Measuring” for further details about 2D metrology.
’start_phi’: The parameter specifies the angle at the start point of a circular arc. To create a closed circle the value
of the parameter ’start_phi’ is set to 0 and the value of the parameter ’end_phi’ is set to 2π (with positive
point order). The input value is mapped automatically to the interval [0, 2π].
Suggested values: 0.0, 0.78, 6.28318
Default: 0.0
’end_phi’: The parameter specifies the angle at the end point of a circular arc. To create a closed circle the value
of the parameter ’start_phi’ is set to 0 and the value of the parameter ’end_phi’ is set to 2π (with positive
point order). The input value is mapped internally to the interval [0, 2π].
Suggested values: 0.0, 0.78, 6.28318
Default: 6.28318
’point_order’: The parameter specifies the direction of the circular arc. For the value ’positive’, the circular arc
is defined between ’start_phi’ and ’end_phi’ in mathematically positive direction (counterclockwise). For
the value ’negative’, the circular arc is defined between ’start_phi’ and ’end_phi’ in mathematically negative
direction (clockwise).
List of values: ’positive’, ’negative’
Default: ’positive’
Additionally, all generic parameters that are available for the operator set_metrology_object_param can
be set. Note, however, that for many applications the default values are sufficient and no adjustment is necessary.
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.y(-array) ; real / integer
Row coordinate (or Y) of the center of the circle or circular arc.
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.center.x(-array) ; real / integer
Column (or X) coordinate of the center of the circle or circular arc.
. Radius (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . circle.radius(-array) ; real / integer
Radius of the circle or circular arc.
. MeasureLength1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Half length of the measure regions perpendicular to the boundary.
Default: 20.0
Suggested values: MeasureLength1 ∈ {10.0, 20.0, 30.0}
Value range: 1.0 ≤ MeasureLength1
Minimum increment: 1.0
Recommended increment: 10.0
Restriction: MeasureLength1 < Radius
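Example
A brief sketch using the generic parameters described above; the center, radius, and arc angles are placeholder values.
* Add a three-quarter circular arc (0 to 3/2*pi, counterclockwise)
* instead of a closed circle.
add_metrology_object_circle_measure (MetrologyHandle, 354, 274, 53, \
                                     20, 5, 1, 30, \
                                     ['start_phi','end_phi','point_order'], \
                                     [0.0,4.712,'positive'], Index)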
Result
If the parameters are valid, the operator add_metrology_object_circle_measure returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
’start_phi’: The parameter specifies the angle at the start point of an elliptic arc. The angle at the start point is
measured relative to the positive main axis specified with Phi and corresponds to the smallest surrounding
circle of the ellipse. The actual start point of the ellipse is the intersection of the ellipse with the orthogonal
projection of the corresponding circle point onto the main axis. The angle refers to the coordinate system of
the ellipse, i.e., it is specified relative to the main axis and in a mathematical positive direction. Thus, the two
main poles correspond to the angles 0 and π, the two minor poles to the angles π/2 and 3π/2. To create a
closed ellipse the value of the parameter ’start_phi’ is set to 0 and the value of the parameter ’end_phi’ is set
to 2π (with positive point order). The input value is mapped internally to the interval [0, 2π].
Suggested values: 0.0, 0.78, 6.28318
Default: 0.0
’end_phi’: The parameter specifies the angle at the end point of an elliptic arc. The angle at the end point is
measured relative to the positive main axis specified with Phi and corresponds to the smallest surrounding
circle of the ellipse. The actual end point of the ellipse is the intersection of the ellipse with the orthogonal
projection of the corresponding circle point onto the main axis. The angle refers to the coordinate system of
the ellipse, i.e., it is specified relative to the main axis and in a mathematical positive direction. Thus, the two
main poles correspond to the angles 0 and π, the two minor poles to the angles π/2 and 3π/2. To create a
closed ellipse the value of the parameter ’start_phi’ is set to 0 and the value of the parameter ’end_phi’ is set
to 2π (with positive point order). The input value is mapped automatically to the interval [0, 2π].
Suggested values: 0.0, 0.78, 6.28318
Default: 6.28318
’point_order’: The parameter specifies the direction of the elliptic arc. For the value ’positive’, the elliptic arc
is defined between ’start_phi’ and ’end_phi’ in mathematically positive direction (counterclockwise). For
the value ’negative’, the elliptic arc is defined between ’start_phi’ and ’end_phi’ in mathematically negative
direction (clockwise).
List of values: ’positive’, ’negative’
Default: ’positive’
Additionally, all generic parameters that are available for the operator set_metrology_object_param can
be set. Note, however, that for many applications the default values are sufficient and no adjustment is necessary.
Parameters
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The handle of the model is passed in MetrologyHandle.
Shape specifies which type of object is added to the metrology model. The operator
add_metrology_object_generic returns the index of the added metrology object in the parame-
ter Index. Note that add_metrology_object_generic provides the functionality of the operators
add_metrology_object_circle_measure, add_metrology_object_ellipse_measure,
add_metrology_object_rectangle2_measure and add_metrology_object_line_measure
in one operator.
Possible shapes
Depending on the object specified in Shape the following values are expected:
’circle’: The geometric shape of the metrology object of type circle is specified by its center (Row, Column) and
radius.
ShapeParam=[Row, Column, Radius]
’rectangle2’: The geometric shape of the metrology object of type rectangle is specified by its center (Row, Col-
umn), the orientation of the main axis Phi, and the half edge lengths Length1 and Length2. The input value
for Phi is mapped automatically to the interval ] − π, π].
ShapeParam=[Row, Column, Phi, Length1, Length2]
’ellipse’: The geometric shape of the metrology object of type ellipse is specified by its center (Row, Column),
the orientation of the main axis Phi, the length of the larger half axis Radius1, and the length of the smaller
half axis Radius2. The input value for Phi is mapped automatically to the interval ] − π, π].
ShapeParam=[Row, Column, Phi, Radius1, Radius2]
’line’: The geometric shape of the metrology object of type line is described by the coordinates of its start point
(RowBegin, ColumnBegin) and the coordinates of its end point (RowEnd, ColumnEnd).
ShapeParam=[RowBegin, ColumnBegin, RowEnd, ColumnEnd]
Definition of measure regions
add_metrology_object_generic also prepares the rectangular measure regions. The rectangular measure
regions lie perpendicular to the boundary of the object. The half edge lengths of the measure regions perpendicular
and tangential to the boundary of the object are set in MeasureLength1 and MeasureLength2. The centers
of the measure regions lie on the boundary of the object. The parameter MeasureSigma specifies a standard
deviation that is used by the operator apply_metrology_model to smooth the gray values of the image.
Salient edges can be selected with the parameter MeasureThreshold, which constitutes a threshold on the
amplitude, i.e., the absolute value of the first derivative of the edge.
Generic parameters
Generic parameters and their values can be specified using GenParamName and GenParamValue. All
generic parameters that are available in the operator set_metrology_object_param can also be set in
add_metrology_object_generic. Note, however, that for many applications the default values are sufficient
and no adjustment is necessary. Furthermore, the following values for GenParamName and GenParamValue
are available only for Shape = ’circle’ and ’ellipse’:
’start_phi’: The parameter specifies the angle at the start point of a circular or elliptic arc. For an ellipse, the angle
at the start point is measured relative to the positive main axis and corresponds to the smallest surrounding
circle of the ellipse. The actual start point of the ellipse is the intersection of the ellipse with the orthogonal
projection of the corresponding circle point onto the main axis. To create a closed circle or ellipse the value
of the parameter ’start_phi’ is set to 0 and the value of the parameter ’end_phi’ is set to 2π (with positive
point order). The input value is mapped automatically to the interval [0, 2π].
Suggested values: 0.0, 0.78, 6.28318
Default: 0.0
’end_phi’: The parameter specifies the angle at the end point of a circular or elliptic arc. For an ellipse, the angle
at the end point is measured relative to the positive main axis and corresponds to the smallest surrounding
circle of the ellipse. The actual end point of the ellipse is the intersection of the ellipse with the orthogonal
projection of the corresponding circle point onto the main axis. To create a closed circle or ellipse the value
of the parameter ’start_phi’ is set to 0 and the value of the parameter ’end_phi’ is set to 2π (with positive
point order). The input value is mapped internally to the interval [0, 2π].
Suggested values: 0.0, 0.78, 6.28318
Default: 6.28318
’point_order’: The parameter specifies the direction of the circular or elliptic arc. For the value ’positive’, the arc
is defined between ’start_phi’ and ’end_phi’ in mathematically positive direction (counterclockwise). For the
value ’negative’, the arc is defined between ’start_phi’ and ’end_phi’ in mathematically negative direction
(clockwise).
List of values: ’positive’, ’negative’
Default: ’positive’
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Shape (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; string
Type of the metrology object to be added.
Default: ’circle’
List of values: Shape ∈ {’circle’, ’ellipse’, ’rectangle2’, ’line’}
. ShapeParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value-array ; real / integer
Parameters of the metrology object to be added.
. MeasureLength1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Half length of the measure regions perpendicular to the boundary.
Default: 20.0
Suggested values: MeasureLength1 ∈ {10.0, 20.0, 30.0}
Value range: 1.0 ≤ MeasureLength1 ≤ 511.0 (lin)
Minimum increment: 1.0
Recommended increment: 10.0
. MeasureLength2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Half length of the measure regions tangential to the boundary.
Default: 5.0
Suggested values: MeasureLength2 ∈ {3.0, 5.0, 10.0}
Value range: 1.0 ≤ MeasureLength2 ≤ 511.0 (lin)
Minimum increment: 1.0
Recommended increment: 10.0
. MeasureSigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Sigma of the Gaussian function for the smoothing.
Default: 1.0
Suggested values: MeasureSigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Value range: 0.4 ≤ MeasureSigma ≤ 100 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
. MeasureThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Minimum edge amplitude.
Default: 30.0
Suggested values: MeasureThreshold ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Value range: 1 ≤ MeasureThreshold ≤ 255 (lin)
Minimum increment: 0.5
Recommended increment: 2
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’distance_threshold’, ’end_phi’, ’instances_outside_measure_regions’,
’max_num_iterations’, ’measure_distance’, ’measure_interpolation’, ’measure_select’, ’measure_transition’,
’min_score’, ’num_instances’, ’num_measures’, ’point_order’, ’rand_seed’, ’start_phi’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; real / integer / string
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {1, 2, 3, 4, 5, 10, 20, ’all’, ’true’, ’false’, ’first’, ’last’, ’positive’,
’negative’, ’uniform’, ’nearest_neighbor’, ’bilinear’, ’bicubic’}
. Index (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .integer ; integer
Index of the created metrology object.
Example
create_metrology_model (MetrologyHandle)
read_image (Image, 'fabrik')
get_image_size (Image, Width, Height)
set_metrology_model_image_size (MetrologyHandle, Width, Height)
LinePar := [45,360,415,360]
RectPar1 := [270,232,rad(0),30,25]
RectPar2 := [360,230,rad(0),30,25]
RectPar3 := [245,320,rad(-90),70,35]
* Add two rectangles
add_metrology_object_generic (MetrologyHandle, 'rectangle2', \
[RectPar1,RectPar2], 20, 5, 1, 30, [], [], \
Indices)
* Add a rectangle and a line
add_metrology_object_generic (MetrologyHandle, ['rectangle2','line'], \
[RectPar3,LinePar], 20, 5, 1, 30, [], [], \
Index)
get_metrology_object_model_contour (Contour, MetrologyHandle, 'all', 1.5)
apply_metrology_model (Image, MetrologyHandle)
get_metrology_object_result_contour (Contour1, MetrologyHandle, 'all', \
'all', 1.5)
Result
If the parameters are valid, the operator add_metrology_object_generic returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
add_metrology_object_line_measure ( : : MetrologyHandle,
RowBegin, ColumnBegin, RowEnd, ColumnEnd, MeasureLength1,
MeasureLength2, MeasureSigma, MeasureThreshold, GenParamName,
GenParamValue : Index )
add_metrology_object_rectangle2_measure ( : : MetrologyHandle,
Row, Column, Phi, Length1, Length2, MeasureLength1,
MeasureLength2, MeasureSigma, MeasureThreshold, GenParamName,
GenParamValue : Index )
of the rectangle. The half edge lengths of the measure regions perpendicular and tangential to the boundary of
the rectangle are set in MeasureLength1 and MeasureLength2. The centers of the measure regions lie on
the boundary of the rectangle. The parameter MeasureSigma specifies a standard deviation that is used by the
operator apply_metrology_model to smooth the gray values of the image. Salient edges can be selected
with the parameter MeasureThreshold, which constitutes a threshold on the amplitude, i.e., the absolute value
of the first derivative of the edge.
Furthermore, you can adjust some generic parameters within GenParamName and GenParamValue. In par-
ticular, all generic parameters that are available in the operator set_metrology_object_param can be set.
Note, however, that for many applications the default values are sufficient and no adjustment is necessary.
The operator add_metrology_object_rectangle2_measure returns the index of the added metrology
object within the metrology model in the parameter Index.
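For illustration, a rectangle could be added as follows (the center, orientation, and half edge lengths are placeholder values):
* Add a rectangle with center (270,232), orientation 0 rad, and half
* edge lengths 30 and 25.
add_metrology_object_rectangle2_measure (MetrologyHandle, 270, 232, \
                                         rad(0), 30, 25, 20, 5, 1, 30, \
                                         [], [], Index)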
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.y(-array) ; real / integer
Row (or Y) coordinate of the center of the rectangle.
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.center.x(-array) ; real / integer
Column (or X) coordinate of the center of the rectangle.
. Phi (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.angle.rad(-array) ; real / integer
Orientation of the main axis [rad].
. Length1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hwidth(-array) ; real / integer
Length of the larger half edge of the rectangle.
. Length2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . rectangle2.hheight(-array) ; real / integer
Length of the smaller half edge of the rectangle.
. MeasureLength1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Half length of the measure regions perpendicular to the boundary.
Default: 20.0
Suggested values: MeasureLength1 ∈ {10.0, 20.0, 30.0}
Value range: 1.0 ≤ MeasureLength1
Minimum increment: 1.0
Recommended increment: 10.0
Restriction: MeasureLength1 < Length1 && MeasureLength1 < Length2
. MeasureLength2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Half length of the measure regions tangential to the boundary.
Default: 5.0
Suggested values: MeasureLength2 ∈ {3.0, 5.0, 10.0}
Value range: 1.0 ≤ MeasureLength2
Minimum increment: 1.0
Recommended increment: 10.0
. MeasureSigma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Sigma of the Gaussian function for the smoothing.
Default: 1.0
Suggested values: MeasureSigma ∈ {0.4, 0.6, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 7.0, 10.0}
Value range: 0.4 ≤ MeasureSigma ≤ 100.0
Minimum increment: 0.01
Recommended increment: 0.1
. MeasureThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Minimum edge amplitude.
Default: 30.0
Suggested values: MeasureThreshold ∈ {5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 90.0, 110.0}
Value range: 1 ≤ MeasureThreshold ≤ 255 (lin)
Minimum increment: 0.5
Recommended increment: 2
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’distance_threshold’, ’instances_outside_measure_regions’,
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
set_metrology_model_image_size
Possible Successors
align_metrology_model, apply_metrology_model
Alternatives
add_metrology_object_generic
See also
get_metrology_object_model_contour, set_metrology_model_param,
add_metrology_object_circle_measure, add_metrology_object_ellipse_measure,
add_metrology_object_line_measure
Module
2D Metrology
The region extracted with threshold is shown in red. The rectangle computed with
smallest_rectangle2 is shown in green.
1. Setting the reference system
In the image in which the metrology model was defined, extract a region containing the metrology
objects. The pose of this region with respect to the image coordinate system is determined and set as the
reference system of the metrology model using set_metrology_model_param. This step is only
performed once when setting up the metrology model.
Example:
threshold (Image, Region, 0, 50)
smallest_rectangle2 (Region, RowOrig, ColumnOrig, AngleOrig, Length1, \
                     Length2)
set_metrology_model_param (MetrologyHandle, 'reference_system', \
                           [RowOrig, ColumnOrig, AngleOrig])
2. Determining the alignment
In an image in which the metrology model occurs in a different pose, the current pose of the extracted
region is determined. This pose is then used to align the metrology model.
Example:
threshold (CurrentImage, Region, 0, 50)
smallest_rectangle2 (Region, RowAlign, ColumnAlign, AngleAlign, \
                     Length1, Length2)
align_metrology_model (MetrologyHandle, RowAlign, ColumnAlign, \
                       AngleAlign)
Using a shape model:
If a shape model is used to align the metrology model, the reference system with respect to which the metrol-
ogy objects are given has to be set so that it coincides with the coordinate system used by the shape model.
Only then, the results (’row’, ’column’, ’angle’) of get_generic_shape_model_result can be used
directly in align_metrology_model to align the metrology model in the current image. The individual
steps that are needed are shown below.
(1) The contours of the metrology object and the four corresponding points in the image that was used for the
creation of the metrology model. (2) The contours of the metrology object and the four corresponding points
in a new image.
1. Determine the point correspondences
2. Estimate the model pose
The following operator sequence calculates the parameters of the model pose (Row, Column, Angle)
from corresponding points in the model image and one other image.
Example:
vector_to_rigid (PRowModel, PColumnModel, PRowCurrent, PColumnCurrent, \
                 HomMat2D)
hom_mat2d_to_affine_par (HomMat2D, Sx, Sy, Angle, Theta, Row, Column)
align_metrology_model (MetrologyHandle, Row, Column, Angle)
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real / integer
Row coordinate of the alignment.
Default: 0
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real / integer
Column coordinate of the alignment.
Default: 0
. Angle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad ; real / integer
Rotation angle of the alignment.
Default: 0
Example
create_metrology_model (MetrologyHandle)
get_image_size (Image, Width, Height)
set_metrology_model_image_size (MetrologyHandle, Width, Height)
CircleParam := [354,274,53]
CircleParam := [CircleParam,350,519,53]
add_metrology_object_generic (MetrologyHandle, 'circle', CircleParam, 20,\
5, 1, 30, [], [], CircleIndices)
create_generic_shape_model (ModelID)
set_generic_shape_model_param (ModelID, 'metric', 'use_polarity')
set_generic_shape_model_param (ModelID, 'min_contrast', 20)
train_generic_shape_model (Image, ModelID)
* Determine location of shape model origin
area_center (Image, Area, RowOrigin, ColOrigin)
set_metrology_model_param (MetrologyHandle, 'reference_system', \
[RowOrigin,ColOrigin,0])
read_image (CurrentImage, 'metal-parts/circle_plate_02')
find_generic_shape_model (CurrentImage, ModelID, MatchResultID, \
NumMatchResult)
get_generic_shape_model_result (MatchResultID, 'all', 'row', Row)
get_generic_shape_model_result (MatchResultID, 'all', 'column', Col)
get_generic_shape_model_result (MatchResultID, 'all', 'angle', Angle)
align_metrology_model (MetrologyHandle, Row, Col, Angle)
apply_metrology_model (CurrentImage, MetrologyHandle)
get_metrology_object_result (MetrologyHandle, CircleIndices, 'all', \
'result_type', 'all_param', Rectangle)
get_metrology_object_result_contour (Contour, MetrologyHandle, \
CircleIndices, 'all', 1.5)
Result
If the parameters are valid, the operator align_metrology_model returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
Measure and fit the geometric shapes of all metrology objects of a metrology model.
apply_metrology_model locates the edges inside the measure regions of the metrology objects of the metrol-
ogy model MetrologyHandle within Image and fits the corresponding geometric shapes to the resulting edge
positions.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The measurements are performed as follows:
Determining the edge positions
Within the measure regions of the metrology objects, the positions of the edges are determined. The edge location
is calculated internally with the operator measure_pos or fuzzy_measure_pos. The latter is used if at least
one fuzzy function was set for the metrology objects with set_metrology_object_fuzzy_param.
Fitting geometric shapes to the edge positions
The geometric shapes of the metrology objects are adapted to fit optimally to the resulting edge positions. In
particular, a RANSAC algorithm is used to select a set of initial edge positions that is necessary to create an
instance of the specific geometric shape, e.g., three edge positions are selected for a metrology object of type
circle. Then, those edge positions that are near the corresponding instance of the geometric shape are de-
termined and, if the number of suitable edge positions is sufficient (see the generic parameter ’min_score’ of
set_metrology_object_param), are selected for the final fitting of the geometric shape. If the number
of suitable edge positions is not sufficient, another set of initial edge positions is tested until a suitable selection
of edge positions is found. Into the edge positions that are selected for the final fitting, the geometric shape is
fitted and its parameters are stored in the metrology model. Note that more than one instance for each metrol-
ogy object is returned if the generic parameter ’num_instances’ is set to a value larger than 1. This and other
parameters can be set when adding the metrology objects to the metrology model or separately with the operator
set_metrology_object_param. Note that for each instance of the metrology object different initial edge
positions are used, i.e., a second instance is based on edge positions that were not already used for the fitting of the
first instance. The algorithm stops either when ’num_instances’ instances were found or if the remaining number
of suitable initial edge positions is too low for a further fitting of the geometric shape.
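For example, assuming a metrology object with index 0, the maximum number of returned instances could be raised before the measurement (a brief sketch):
* Request up to two instances for the metrology object with index 0.
set_metrology_object_param (MetrologyHandle, 0, 'num_instances', 2)
apply_metrology_model (Image, MetrologyHandle)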
Accessing the results
The results of the measurements can be accessed from the metrology model using
get_metrology_object_result. Note that if more than one instance of an object is returned,
the order of the returned instances is arbitrary and therefore not a measure of the quality of the fit-
ting. Note further that if the parameters ’camera_param’ and ’plane_pose’ were set for the metrology
model using set_metrology_model_param, world coordinates are used for the fitting. Other-
wise, image coordinates are used. The XLD contours for the measured objects can be obtained using
get_metrology_object_result_contour.
Attention
Note that all measure regions of all metrology objects must be recomputed if the width or the height
of the input Image is not equal to the width and height stored in the metrology object (e.g., set with
set_metrology_model_image_size). This leads to longer execution times of the operator.
Note further that apply_metrology_model ignores the domain of Image for efficiency reasons (see also
measure_pos).
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2 / real
Input image.
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
Result
If the parameters are valid, the operator apply_metrology_model returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
• MetrologyHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
add_metrology_object_generic, add_metrology_object_circle_measure,
add_metrology_object_ellipse_measure, add_metrology_object_line_measure,
add_metrology_object_rectangle2_measure, align_metrology_model,
set_metrology_model_param, set_metrology_object_param
Possible Successors
get_metrology_object_result, get_metrology_object_result_contour,
get_metrology_object_measures
See also
set_metrology_object_fuzzy_param, read_metrology_model, write_metrology_model
Module
2D Metrology
clear_metrology_model ( : : MetrologyHandle : )
copy_metrology_model ( : : MetrologyHandle,
Index : CopiedMetrologyHandle )
Result
If the parameters are valid, the operator copy_metrology_model returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
create_metrology_model ( : : : MetrologyHandle )
Result
If the parameters are valid, the operator create_metrology_model returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
Module
2D Metrology
deserialize_metrology_model (
: : SerializedItemHandle : MetrologyHandle )
Possible Predecessors
fread_serialized_item, receive_serialized_item, serialize_metrology_model
Possible Successors
get_metrology_object_param, get_metrology_object_fuzzy_param,
apply_metrology_model
Module
2D Metrology
get_metrology_model_param ( : : MetrologyHandle,
GenParamName : GenParamValue )
Get parameters that are valid for the entire metrology model.
get_metrology_model_param queries parameters that are valid for the entire metrology model.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The metrology model is defined by the handle MetrologyHandle.
The following generic parameter names for GenParamName are possible:
’camera_param’: The internal camera parameters that are set for the metrology model.
’plane_pose’: The 3D pose of the measurement plane that is set for the metrology model. The 3D pose is given in
camera coordinates.
’reference_system’: The rotation and translation of the current reference coordinate system with respect to the
image coordinate system. The tuple returned in GenParamValue contains [row, column, angle].
’scale’: The scaling factor or unit of the results of the measurement returned by
get_metrology_object_result.
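For example, the reference system and the camera parameters that are currently set could be queried as follows (variable names are illustrative):
* Query the reference system as [row, column, angle].
get_metrology_model_param (MetrologyHandle, 'reference_system', ReferenceSystem)
* Query the internal camera parameters that are set for the model.
get_metrology_model_param (MetrologyHandle, 'camera_param', CameraParam)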
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Name of the generic parameter.
Default: ’camera_param’
List of values: GenParamName ∈ {’camera_param’, ’plane_pose’, ’scale’, ’reference_system’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / real / integer
Value of the generic parameter.
Result
If the parameters are valid, the operator get_metrology_model_param returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Execution Information
’fuzzy_thresh’: The meaning and the use of this parameter is equivalent to the parameter FuzzyThresh of the
operator fuzzy_measure_pos and is described there.
’function_contrast’: With this parameter the fuzzy function of type contrast that is set with the operator
set_metrology_object_fuzzy_param can be queried. The meaning and the use of this parameter is
equivalent to the parameter SetType with the value ’contrast’ of the operator set_fuzzy_measure and is
described there. The return value GenParamValue contains the function of the metrology object.
’function_position’: With this parameter the fuzzy function of type position that is set with the operator
set_metrology_object_fuzzy_param can be queried. Because only one fuzzy function of a type can be set,
only the last set function can be returned. The type can be ’function_position’, ’function_position_center’,
’function_position_end’, ’function_position_first_edge’, or ’function_position_last_edge’.
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; string / integer
Index of the metrology objects.
Default: ’all’
Suggested values: Index ∈ {’all’, 0, 1, 2}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: ’fuzzy_thresh’
List of values: GenParamName ∈ {’function_contrast’, ’function_position’, ’fuzzy_thresh’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value-array ; real / integer
Values of the generic parameters.
Result
If the parameters are valid, the operator get_metrology_object_fuzzy_param returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
Possible Predecessors
read_metrology_model
Possible Successors
get_metrology_object_param, get_metrology_object_fuzzy_param
See also
get_metrology_object_num_instances
Module
2D Metrology
Get the measure regions and the results of the edge location for the metrology objects of a metrology model.
get_metrology_object_measures provides access to the measure regions of the metrology objects that were
created with add_metrology_object_generic, add_metrology_object_circle_measure, etc.,
as XLD contours, as well as to the results of the edge location (in image coordinates) performed by
apply_metrology_model.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The metrology model is defined by the handle MetrologyHandle. The parameter Index determines for which
metrology objects the information is accessed. With Index set to ’all’, the measure regions and the results of the
edge location for all metrology objects are accessed.
If positive and negative edges are available in the measure regions (see the generic parameter value ’mea-
sure_transition’ of the operator set_metrology_object_param), with the parameter Transition the
desired edges (positive or negative) can be selected. If Transition is set to ’positive’, only positive edges are
returned. If Transition is set to ’negative’, only negative edges are returned. All edges are returned if the
parameter Transition is set to ’all’.
The operator get_metrology_object_measures returns for each measure region one rectangular
XLD contour with the boundary of the measure region in the parameter Contours. After calling
apply_metrology_model, additionally the image coordinates of the results of the edge location are returned
as single points in the parameters Row and Column. Note that the order of the values of these points is not
defined. Furthermore, it is not possible to assign the results of the edge location to specific measure regions. If
get_metrology_object_measures is called before apply_metrology_model, the parameters Row
and Column remain empty.
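A small sketch of a typical call sequence; the parameter order follows the parameters described above, and the variable names are illustrative:
* Perform the edge detection and the fitting first.
apply_metrology_model (Image, MetrologyHandle)
* Access the measure regions of all metrology objects together with the
* detected positive edges.
get_metrology_object_measures (Contours, MetrologyHandle, 'all', 'positive', Row, Column)
* Row and Column now contain the image coordinates of the located edges.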
Parameters
Result
If the parameters are valid, the operator get_metrology_object_model_contour returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
get_metrology_object_num_instances ( : : MetrologyHandle,
Index : NumInstances )
’object_type’: Type of the geometric shape of the metrology object. For a metrology object of type circle, the
output parameter GenParamValue contains the value ’circle’. For a metrology object of type ellipse,
the output parameter GenParamValue contains the value ’ellipse’. For a metrology object of type
line, the output parameter GenParamValue contains the value ’line’. For a metrology object of type
rectangle, the output parameter GenParamValue contains the value ’rectangle’.
’object_params’: The parameters of the geometric shape of the metrology object. For a metrology object of
type circle, the output parameter GenParamValue contains the geometry of the circle in the following
order: ’row’, ’column’, ’radius’. The meaning and the use of these parameters is described with the
operator add_metrology_object_circle_measure. For a metrology object of type ellipse,
the output parameter GenParamValue contains the geometry of the ellipse in the following order:
’row’, ’column’, ’phi’, ’radius1’, ’radius2’. The meaning and the use of these parameters is described
with the operator add_metrology_object_ellipse_measure. For a metrology object of type
line, the output parameter GenParamValue contains the geometry of the line in the following order:
’row_begin’, ’column_begin’, ’row_end’, ’column_end’. The meaning and the use of these parameters
is described with the operator add_metrology_object_line_measure. For a metrology object
of type rectangle, the output parameter GenParamValue contains the geometry of the rectangle in
the following order: ’row’, ’column’, ’phi’, ’length1’, ’length2’. The meaning and the use of these
parameters is described with the operator add_metrology_object_rectangle2_measure.
• Only valid for a metrology object of type circle:
’row’, ’column’, ’radius’: These are parameters for a metrology object of type cir-
cle. The meaning and the use of these parameters is described with the operator
add_metrology_object_circle_measure.
• Only valid for a metrology object of type ellipse:
’row’, ’column’, ’phi’, ’radius1’, ’radius2’: These are parameters for a metrology object of type el-
lipse. The meaning and the use of these parameters is described with the operator
add_metrology_object_ellipse_measure.
• Only valid for a metrology object of type line:
’row_begin’, ’column_begin’, ’row_end’, ’column_end’: These are parameters for a metrology object
of type line. The meaning and the use of these parameters is described with the operator
add_metrology_object_line_measure.
• Only valid for a metrology object of type rectangle:
’row’, ’column’, ’phi’, ’length1’, ’length2’: These are parameters for a metrology object of type rect-
angle. The meaning and the use of these parameters is described with the operator
add_metrology_object_rectangle2_measure.
Parameters
Result
If the parameters are valid, the operator get_metrology_object_param returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
Possible Predecessors
get_metrology_object_indices, set_metrology_object_param
Possible Successors
set_metrology_object_param
See also
get_metrology_object_fuzzy_param, get_metrology_object_num_instances
Module
2D Metrology
’result_type’: If GenParamName is set to ’result_type’, then GenParamValue controls which results are returned
for a metrology object and how. All measured parameters of the queried metrology object can be queried at
once, specific parameters can be queried individually, or the score for the metrology object can be queried.
’Obtaining all parameters’: If GenParamValue is set to ’all_param’, then all measured parame-
ters of a metrology object are returned. If camera parameters and a pose have been set (see
set_metrology_model_param), the results are returned in metric coordinates, otherwise in pixels.
For a circle, the return values are the coordinates of the center and the radius of the circle. The order is
[’row’, ’column’, ’radius’] or [’x’, ’y’, ’radius’] respectively.
For an ellipse, the return values are the coordinates of the center, the orientation of the major axis
’phi’, the length of the larger half axis ’radius1’, and the length of the smaller half axis ’radius2’ of the
ellipse. The order is [’row’, ’column’, ’phi’, ’radius1’, ’radius2’] or [’x’, ’y’, ’phi’, ’radius1’, ’radius2’]
respectively.
For a line, the start and end points of the line are returned. The order is [’row_begin’, ’column_begin’,
’row_end’, ’column_end’] or [’x_begin’, ’y_begin’, ’x_end’, ’y_end’] respectively.
For a rectangle, the return values are the coordinates of the center, the orientation of the main axis
’phi’, the length of the larger half edge ’length1’, and the length of the smaller half edge ’length2’ of
the rectangle. The order is [’row’, ’column’, ’phi’, ’length1’, ’length2’] or [’x’, ’y’, ’phi’, ’length1’,
’length2’] respectively.
’Obtaining specific parameters’: Measured object parameters can also be queried individually by providing
the desired parameter name in GenParamValue.
When no camera parameters and no measurement plane are set, the following parameters can be queried
individually, depending on whether they are available for the respective object. Note that for lines,
additionally the three parameters of the Hessian normal form can be queried, i.e., the unit normal vector
’nrow’, ’ncolumn’ and the orthogonal distance ’distance’ of the line from the origin of the coordinate
system. The sign of the distance determines the side of the line on which the origin is located.
List of values: ’row’, ’column’, ’radius’, ’phi’ , ’radius1’, ’radius2’, ’length1’, ’length2’, ’row_begin’,
’column_begin’, ’row_end’, ’column_end’, ’nrow’, ’ncolumn’, ’distance’
If camera parameters and a measurement plane were set, the parameters are returned in metric coordinates.
In this case, the following parameters can be queried individually, depending on whether they are available
for the respective object. Note that for lines, additionally the three parameters of the Hessian normal form
can be queried, i.e., the unit normal vector ’nx’, ’ny’ and the orthogonal distance ’distance’ of the line from
the origin of the coordinate system. The sign of the distance determines the side of the line on which the
origin is located.
List of values: ’x’, ’y’, ’radius’, ’phi’, ’radius1’, ’radius2’, ’length1’, ’length2’, ’x_begin’, ’y_begin’,
’x_end’, ’y_end’, ’nx’, ’ny’, ’distance’
’Obtaining the score’: If GenParamValue is set to ’score’, the fitting scores are returned. The score
represents the number of measurements that are used for the calculation of the results divided by the
maximum number of measure regions.
’used_edges’: To query the edge points that were actually used for a fitted metrology object, you can choose
between the following values for GenParamValue:
’row’: Return the row coordinate of the edges that were used to fit the metrology object.
’column’: Return the column coordinate of the edges that were used to fit the metrology object.
’amplitude’: Return the edge amplitude of the edges that were used to fit the metrology object.
List of values: ’row’, ’column’, ’amplitude’
’angle_direction’: The parameter determines the rotation direction for angles that result from the fitting. If
’angle_direction’ is set to ’positive’, the angle is specified between the main axis of the object and the
horizontal axis of the coordinate system in the mathematically positive direction (counterclockwise). If
’angle_direction’ is set to ’negative’, the angle is specified between the main axis of the object and the
horizontal axis of the coordinate system in the mathematically negative direction (clockwise). The angles
are returned in radians.
List of values: ’positive’, ’negative’
Default: ’positive’
It is possible to query the results of several metrology objects (see the parameter Index) and several instances
(see the parameter Instance) of the metrology objects simultaneously. The results are returned in the following
order in Parameter: 1st instance of 1st metrology object, 2nd instance of 1st metrology object, etc., 1st instance
of 2nd metrology object, 2nd instance of 2nd metrology object, etc.
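A brief sketch of querying the edge points that were actually used for the fitting (indices and variable names are illustrative):
* Coordinates and amplitudes of the edges used for instance 0 of metrology object 0.
get_metrology_object_result (MetrologyHandle, 0, 0, 'used_edges', 'row', UsedRows)
get_metrology_object_result (MetrologyHandle, 0, 0, 'used_edges', 'column', UsedColumns)
get_metrology_object_result (MetrologyHandle, 0, 0, 'used_edges', 'amplitude', UsedAmplitudes)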
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Index of the metrology object.
Default: 0
Suggested values: Index ∈ {’all’, 0, 1, 2}
. Instance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; string / integer
Instance of the metrology object.
Default: ’all’
Suggested values: Instance ∈ {’all’, 0, 1, 2}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Name of the generic parameter.
Default: ’result_type’
List of values: GenParamName ∈ {’result_type’, ’angle_direction’, ’used_edges’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value-array ; string / real
Value of the generic parameter.
Default: ’all_param’
Suggested values: GenParamValue ∈ {’all_param’, ’score’, ’true’, ’false’, ’row’, ’column’, ’amplitude’,
’radius’, ’phi’, ’radius1’, ’radius2’, ’length1’, ’length2’, ’row_begin’, ’column_begin’, ’row_end’,
’column_end’, ’nrow’, ’ncolumn’, ’distance’, ’x’, ’y’, ’x_begin’, ’y_begin’, ’x_end’, ’y_end’, ’nx’, ’ny’,
’positive’, ’negative’}
. Parameter (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer / string
Result values.
Result
If the parameters are valid, the operator get_metrology_object_result returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
Possible Predecessors
apply_metrology_model
Possible Successors
clear_metrology_model
See also
get_metrology_object_result_contour, get_metrology_object_measures
Module
2D Metrology
get_metrology_object_result_contour (
: Contour : MetrologyHandle, Index, Instance, Resolution : )
reset_metrology_object_fuzzy_param ( : : MetrologyHandle,
Index : )
• MetrologyHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
set_metrology_object_fuzzy_param
See also
reset_metrology_object_param
Module
2D Metrology
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; string / integer
Index of the metrology objects.
Default: ’all’
Suggested values: Index ∈ {’all’, 0, 1, 2}
Result
If the parameters are valid, the operator reset_metrology_object_param returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
• MetrologyHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
set_metrology_object_param
See also
reset_metrology_object_fuzzy_param
Module
2D Metrology
serialize_metrology_model (
: : MetrologyHandle : SerializedItemHandle )
Set parameters that are valid for the entire metrology model.
set_metrology_model_param sets or changes parameters that are valid for the entire metrology model
MetrologyHandle.
For an explanation of the concept of 2D metrology see the introduction of chapter 2D Metrology.
The following values for GenParamName and GenParamValue are possible:
Calibration
If both internal camera parameters and the 3D pose of the measurement plane are set,
apply_metrology_model calculates the results in metric coordinates.
’camera_param’: Often the internal camera parameters are the result of calibrating the camera with the operator
calibrate_cameras (see Calibration for the sequence of the parameters and the underlying camera
model). It is possible to discard the internal camera parameters by setting ’camera_param’ to [].
Default: []
’plane_pose’: The 3D pose of the measurement plane in camera coordinates. It is possible to discard the pose by
setting ’plane_pose’ to [].
Default: []
’reference_system’: The tuple given in GenParamValue should contain [row, column, angle]. By default the
reference system is the image coordinate system which has its origin in the top left corner. A new reference
system is defined with respect to the image coordinate system by its translation (row, column) and its rotation
angle (angle). All components of the metrology model are converted into the new reference coordinate
system. In the following figure, the reference system of the metrology model is set to the center of the image.
set_metrology_model_param(MetrologyHandle, ’reference_system’,
[Height/2,Width/2,0])
(1) Several metrology objects and their contours are shown in blue. (2) The new reference system for the
metrology model is placed in the center of the image. As a consequence, the positions and orientations of
the metrology objects are moved into the reverse direction. The resulting contours of the metrology objects
are shown in blue.
Default: [0, 0, 0]
’scale’: The parameter ’scale’ must be specified as the ratio of the desired unit to the original unit. If no camera
parameters are given, the default unit is pixel.
If ’camera_param’ and ’plane_pose’ are set, the original unit is determined by the coordinates of the cal-
ibration object. Standard HALCON calibration plates are defined in metric coordinates. If such a plate was
used for the calibration, the desired unit can be set directly as one of the unit strings listed under
GenParamValue below (e.g., ’m’, ’cm’, ’mm’, ’microns’, ’um’) instead of a numeric scaling factor.
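A sketch of how calibration information might be attached to the model so that results are returned in millimeters (CameraParam and PlanePose are assumed to come from a previous camera calibration):
* Use the calibrated camera and the 3D pose of the measurement plane.
set_metrology_model_param (MetrologyHandle, 'camera_param', CameraParam)
set_metrology_model_param (MetrologyHandle, 'plane_pose', PlanePose)
* Return the measurement results in millimeters.
set_metrology_model_param (MetrologyHandle, 'scale', 'mm')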
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Name of the generic parameter.
Default: ’camera_param’
List of values: GenParamName ∈ {’camera_param’, ’plane_pose’, ’scale’, ’reference_system’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / real / integer
Value of the generic parameter.
Default: []
Suggested values: GenParamValue ∈ {1.0, 0.1, ’m’, ’cm’, ’mm’, ’microns’, ’um’}
Result
If the parameters are valid, the operator set_metrology_model_param returns the value 2 (H_MSG_TRUE).
If necessary, an exception is raised.
Execution Information
• MetrologyHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_metrology_model, set_metrology_model_image_size
Possible Successors
add_metrology_object_generic, get_metrology_object_model_contour
See also
set_metrology_object_param, align_metrology_model, get_metrology_model_param
Module
2D Metrology
’fuzzy_thresh’: The parameter specifies the minimum fuzzy value. The meaning and the use of this parameter
is described with the operator fuzzy_measure_pos. There, the parameter corresponds to the parameter
FuzzyThresh.
Default: 0.5
’function_contrast’: The parameter specifies a fuzzy function of type contrast. The meaning and the use of
this parameter is described with the operator set_fuzzy_measure. There, the parameter corresponds to
the parameter SetType with the value ’contrast’ and its value corresponds to the parameter Function.
Default: ’disabled’
’function_position’: The parameter specifies a fuzzy function of type position. The meaning and the use of
this parameter is described with the operator set_fuzzy_measure. There, the parameter corresponds to
the parameter SetType with the value ’position’ and its value corresponds to the parameter Function.
Default: ’disabled’
’function_position_center’: The parameter specifies a fuzzy function of type position_center. The meaning
and the use of this parameter is described with the operator set_fuzzy_measure. There, the parameter
corresponds to the parameter SetType with the value ’position_center’ and its value corresponds to the parameter
Function.
Default: ’disabled’
’function_position_end’: The parameter specifies a fuzzy function of type position_end. The meaning and
the use of this parameter is described with the operator set_fuzzy_measure. There, the parameter cor-
responds to the parameter SetType with the value ’position_end’ and its value corresponds to the parameter
Function.
Default: ’disabled’
A fuzzy function is discarded if the fuzzy function value is set to ’disabled’. All pre-
viously defined fuzzy functions and fuzzy parameters can be discarded completely using
reset_metrology_object_fuzzy_param. The current configuration of the metrology objects can
be accessed with get_metrology_object_fuzzy_param. Note that if at least one fuzzy function is
specified, the operator fuzzy_measure_pos is used for the edge detection.
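A small sketch of how a fuzzy contrast function might be attached to all metrology objects (the control points and the threshold are arbitrary illustration values):
* Edges with an amplitude of 30 or more get full weight.
create_funct_1d_pairs ([10.0, 30.0], [0.0, 1.0], FuzzyFunction)
set_metrology_object_fuzzy_param (MetrologyHandle, 'all', 'function_contrast', FuzzyFunction)
* Reject edges whose fuzzy weight is below 0.6.
set_metrology_object_fuzzy_param (MetrologyHandle, 'all', 'fuzzy_thresh', 0.6)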
Parameters
. MetrologyHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . metrology_model ; handle
Handle of the metrology model.
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; string / integer
Index of the metrology objects.
Default: ’all’
Suggested values: Index ∈ {’all’, 0, 1, 2}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: ’fuzzy_thresh’
List of values: GenParamName ∈ {’function_contrast’, ’function_position’, ’function_position_center’,
’function_position_end’, ’function_position_first_edge’, ’function_position_last_edge’, ’fuzzy_thresh’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value-array ; real / integer
Values of the generic parameters.
Default: 0.5
Suggested values: GenParamValue ∈ {0.1, 0.3, 0.5, 0.6, 0.7, 0.9, 1, 2, 3, 4, 5, 10, 20}
Result
If the parameters are valid, the operator set_metrology_object_fuzzy_param returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
’measure_length1’: The value of this parameter specifies the half length of the measure regions perpendicular to
the metrology object boundary. It is equivalent to the measure tolerance. The unit of this value is pixel.
Suggested values: 10.0, 20.0, 30.0
Default: 20.0
Restriction: ’measure_length1’ >= 1.0
’measure_length2’: The value of this parameter specifies the half length of the measure regions tangential to the
metrology object boundary. The unit of this value is pixel.
Suggested values: 3.0, 5.0, 10.0
Default: 5.0
Restriction: ’measure_length2’ >= 0.0
’measure_distance’: The value of this parameter specifies the desired distance between the centers of two measure
regions. If the value leads to too few measure regions, the parameter has no influence and the number of
measure regions will be increased to the minimum required number of measure regions (circle = 3, ellipse =
5, line = 2, rectangle = 2 per side = 8). The unit of this value is pixel.
If this value is set, the parameter ’num_measures’ has no influence.
Suggested values: 5.0, 15.0, 20.0, 30.0
Default: 10.0
’num_measures’: The value of this parameter specifies the desired number of measure regions.
The minimum number of measure regions depends on the type of the metrology object:
• Line: 2 measure regions
• Circle: 3 measure regions
• Circular arc: 4 measure regions
• Ellipse: 5 measure regions
• Elliptic arc: 6 measure regions
• Rectangle: 8 measure regions (2 regions each side)
If the chosen value is too low, ’num_measures’ is automatically set to the respective minimum value.
If this value is set, the parameter ’measure_distance’ has no influence.
Suggested values: 8, 10, 16, 20, 30, 50, 100
Edge detection:
’measure_sigma’: The parameter specifies the sigma for the Gaussian smoothing. The meaning, the use, and the
default value of this parameter are described with the operator measure_pos by the parameter Sigma.
’measure_threshold’: The parameter specifies the minimum edge amplitude. The meaning, the use, and the default
value of this parameter are described with the operator measure_pos by the parameter Threshold.
’measure_select’: The parameter specifies the selection of end points of the edges. The meaning, the use, and the
default value of this parameter are described with the operator measure_pos by the parameter Select.
’measure_transition’: The parameter specifies the use of dark/light or light/dark edges. The meaning and the use
of the values ’all’, ’positive’, and ’negative’ for the parameter ’measure_transition’ is described with the
operator measure_pos by the parameter Transition. Additionally, ’measure_transition’ can be set to
the value ’uniform’. Then, all positive edges (dark/light edges) and all negative edges (light/dark edges) are
detected by the edge detection but when fitting the geometric shapes, the edges with different edge types are
used separately, i.e., for each instance of a geometric shape either only the positive edges or the negative
edges are used.
The measure direction within the measure regions is from the inside to the outside of the metrology object
for objects of the types circle, ellipse, or rectangle. For metrology objects of the type line, the measure
direction within the measure regions is from left to right, as seen from the first point of the line (see
RowBegin and ColumnBegin of the operator add_metrology_object_line_measure).
List of values: ’all’, ’negative’, ’positive’, ’uniform’
Default: ’all’
’measure_interpolation’: The parameter specifies the type of interpolation to be used. The meaning, the use and
the default value of this parameter is described with the operator gen_measure_rectangle2 by the
parameter Interpolation.
’min_score’: The parameter determines what score a potential instance must at least have to be regarded as a valid
instance of the metrology object. The score is the number of detected edges that are used to compute the
results divided by the maximum number of measure regions (see apply_metrology_model). If it can
be expected that all edges of the metrology object are present, the parameter ’min_score’ can be set to a value
as high as 0.8 or even 0.9. Note that in images with a high degree of clutter or strong background texture the
parameter ’min_score’ should be set to a value not much lower than 0.7 since otherwise false instances of a
metrology object could be found.
Suggested values: 0.5, 0.7, 0.9
Default: 0.7
’num_instances’: The parameter specifies the maximum number of successfully fitted instances of each metrology
object after which the fitting will stop (see apply_metrology_model). Successfully fitted instances of
the metrology objects must have a score of at least the value of ’min_score’.
Suggested values: 1, 2, 3, 4
Default: 1
’distance_threshold’: apply_metrology_model uses a randomized search algorithm (RANSAC) to fit the
geometric shapes. An edge point is considered to be part of a fitted geometric shape, if the distance of the
edge point to the geometric shape does not exceed the value of ’distance_threshold’.
Suggested values: 0, 1.0, 2.0, 3.5, 5.0
Default: 3.5
’max_num_iterations’: The RANSAC algorithm estimates the number of iterations necessary for fitting the re-
quested geometric shape. The estimation is based on the extracted edge data and the complexity of the shape.
When setting the value of the parameter ’max_num_iterations’, an upper limit for the computed number of
iterations is defined. The number of iterations is still estimated by the RANSAC algorithm but cannot exceed
the value of ’max_num_iterations’. Setting this parameter can be helpful, if the quality of the fitting is not
as important as observing time limits. However, if ’max_num_iterations’ is set too low, the algorithm will
return low-quality or no results.
By default, ’max_num_iterations’ is set to -1, indicating that no additional upper limit is set for the number
of iterations of the RANSAC algorithm.
Suggested values: 10, 100, 1000
Default: -1
’rand_seed’: The parameter specifies the seed for the random number generator for the RANSAC algorithm that
is used for the selection of the edges in the operator apply_metrology_model. If the value of the
parameter ’rand_seed’ is set to a number unequal to the value 0, the operator yields the same result on every
call with the same parameters, because the internally used random number generator is initialized with the
value of the parameter ’rand_seed’.
If the parameter ’rand_seed’ is set to the value 0, the random number generator is initialized with the current
time. In this case, the results are not reproducible.
Suggested values: 0, 1, 42
Default: 42
’instances_outside_measure_regions’: The parameter specifies the validation of the results of measurements. If
the value of the parameter ’instances_outside_measure_regions’ is set to ’false’, only those instances of a
metrology object are valid that lie inside the major axis of the measure regions of this metrology object.
Instances that are not valid are not stored. If the value of the parameter
’instances_outside_measure_regions’ is set to ’true’, all instances of a metrology object are valid.
List of values: ’true’, ’false’
Default: ’false’
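A sketch of typical parameter settings before calling apply_metrology_model (all values are arbitrary illustration values):
* Use 30 measure regions per object and make them 15 pixels long
* perpendicular to the object boundary.
set_metrology_object_param (MetrologyHandle, 'all', 'num_measures', 30)
set_metrology_object_param (MetrologyHandle, 'all', 'measure_length1', 15.0)
* Only use dark/light edges and require a fitting score of at least 0.8.
set_metrology_object_param (MetrologyHandle, 'all', 'measure_transition', 'positive')
set_metrology_object_param (MetrologyHandle, 'all', 'min_score', 0.8)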
Parameters
Execution Information
• MetrologyHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
get_metrology_object_param
Possible Successors
apply_metrology_model, reset_metrology_object_param,
get_metrology_object_param
See also
set_metrology_object_fuzzy_param
Module
2D Metrology
3D Matching
This chapter gives an overview of the different 3D matching approaches available in HALCON.
3D Box Finder
As its name suggests, the box finder can be used to locate box-shaped objects in 3D data. No model of the object
is needed as input for the operator find_box_3d, only the dimensions of the boxes to be found. As a result, you
can retrieve the pose of a gripping point, which is especially useful in bin picking applications.
(1) 3D input data (scene), (2) found instance, including a gripping point.
Surface-Based Matching
The surface-based matching approach is suited to locate more complex objects as well. The shape of these objects
is passed to the operator find_surface_model, or find_surface_model_image respectively, in the
form of a surface model. The poses of the found object instances in the scene are then returned.
Note that there are several different approaches when using surface-based matching. For detailed explanations
regarding when and how to use these approaches, tips, tricks, and troubleshooting, have a look at the technical note
on Surface-Based Matching.
Shape-Based Matching
With shape-based matching, instances of a 3D CAD model are searched in 2D images instead of 3D point
clouds. For this, the edges of the wanted object need to be clearly visible in the image and the used cam-
era needs to be calibrated beforehand. As a result, the object pose is computed and returned by the operator
find_shape_model_3d.
Deep 3D Matching
Deep 3D Matching is a deep-learning-based approach to detect objects in a scene and compute their 3D pose. For
further information please see the chapter 3D Matching / Deep 3D Matching.
3.1 3D Box
1. Obtain the 3D data either as XYZ-images, or directly as a 3D object model with XYZ-mapping.
2. Remove as much background and clutter that is not part of any box from the scene as possible, in order to
increase robustness and speed. To do so, use, e.g., threshold and reduce_domain on the XYZ-images
before calling xyz_to_object_model_3d. Further options are described in the section “Troubleshooting”
below.
3. If the 3D data exists in the form of XYZ-images, convert them to a 3D object model using
xyz_to_object_model_3d.
4. Obtain the approximate box edge lengths that should be found. Note that changing those lengths later on
might make it necessary to also change other parameters, such as MinScore.
5. Call find_box_3d, passing the 3D object model with the scene and the approximate box edge lengths (a minimal call sequence is sketched after this list).
6. Use the procedure visualize_object_model_3d to visualize the results, if necessary.
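A minimal sketch of this workflow, assuming XYZ-images X, Y, Z from a 3D sensor and approximate box edge lengths in meters (all values, variable names, and the depth range are illustrative):
* Keep only points within the relevant depth range.
threshold (Z, ROI, 0.3, 1.2)
reduce_domain (X, ROI, XReduced)
* Convert the XYZ-images into a 3D object model.
xyz_to_object_model_3d (XReduced, Y, Z, ObjectModel3DScene)
* Search for boxes of roughly 0.2 x 0.15 x 0.1 m with a minimum score of 0.5.
create_dict (GenParam)
find_box_3d (ObjectModel3DScene, 0.2, 0.15, 0.1, 0.5, GenParam, GrippingPose, Score, ObjectModel3DBox, BoxInformation)
* Visualize, e.g., with the procedure visualize_object_model_3d.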
results: This key references a dictionary containing the found boxes. They are sorted according to their score
in descending order with ascending integer keys starting at 0.
Each box result is a dictionary with the following keys:
box_pose: This is the box’s pose in the coordinate system of the scene. This pose is used for visualizing
the found box.
box_length_x, box_length_y, box_length_z: The side lengths of the found box corresponding
to box_pose. box_length_x and box_length_y will always contain a positive number. If only
a single side of the box is visible, box_length_z will be set to 0.
gripping_pose: The same pose as returned in GrippingPose.
gripping_length_x, gripping_length_y, gripping_length_z: The side lengths of the
found box corresponding to GrippingPose. gripping_length_x and gripping_length_y
will always contain a positive number. If only a single side of the box is visible,
gripping_length_z will be set to 0.
score: The same score as returned in Score.
one_side_only: Boolean indicating whether only one side of the box is visible (’true’) or not (’false’).
gen_param: This is a dictionary with the parameters passed to find_box_3d. SideLen1, SideLen2, and
SideLen3 are pooled in a tuple with key lengths. The key min_score references MinScore. The
other keys are denoted analogously to the generic parameters of the dictionary GenParam.
sampled_edges: This is the 3D object model with sampled edges. It contains the viewing direction of the edge
points as normal vectors.
sampled_edges_directions: This is the 3D object model with sampled edges (same as for key
sampled_edges). It contains the edge directions of the edge points as normal vectors.
sampled_scene: This is the sampled scene in which the boxes are looked for. It can be used for visualization
or debugging the sampling distance.
sampled_reference_points: This is a 3D object model with all points from the 3D scene that were used as
reference points in the matching process. For each reference point, the optimum pose of the box is computed
under the assumption that the reference point lies on the surface of the box.
Generic Parameters
Additional parameters can be passed as key/tuple pairs in the dictionary GenParam in order to improve the
matching process. The following parameter names serve as keys to their corresponding tuples (see create_dict
and set_dict_tuple).
3d_edges: Allows the 3D scene edges to be set manually. The parameter must be a 3D object model handle. The
edges are usually a result of the operator edges_object_model_3d but can further be filtered in order
to remove outliers. If this parameter is not given, find_box_3d will internally extract the 3D edges similar
to the operator edges_object_model_3d.
3d_edge_min_amplitude: Sets the minimum amplitude of a discontinuity in order for it to be classi-
fied as an edge. Note that if edges were passed manually with the generic parameter 3d_edges, this
parameter is ignored. Otherwise, it behaves similar to the parameter MinAmplitude of the operator
edges_object_model_3d.
Restriction: 3d_edge_min_amplitude >= 0
Default: 10% of the smallest box diagonal.
max_gap: If no edges are passed with 3d_edges, the operator will extract 3D edges internally. The parameter
can be used to control the edge extraction.
max_gap has the same meaning as in edges_object_model_3d.
remove_outer_edges: Removes the outermost edges when set to ’true’. This is for example helpful for bin
picking applications in order to remove the bin.
List of values: ’false’, ’true’
Default: ’false’
max_num_boxes: Limits the number of returned boxes. By default, find_box_3d will return all detected
boxes with a score larger than MinScore. This parameter can be used to restrict the number of returned
boxes.
Default: 0 (return all boxes)
box_type: Sets the type of boxes to search for. For ’full_box_visible’ only boxes with more than one side visible
are returned. If ’single_side_visible’ is set, boxes with only one visible side are searched for. If further box
sides are visible nonetheless, they are ignored. For ’all’ both types are returned.
List of values: ’all’, ’single_side_visible’, ’full_box_visible’
Default: ’all’
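A sketch of how these generic parameters might be collected and passed (all values are illustrative):
* Collect the generic parameters in a dictionary.
create_dict (GenParam)
set_dict_tuple (GenParam, 'remove_outer_edges', 'true')
set_dict_tuple (GenParam, 'max_num_boxes', 5)
set_dict_tuple (GenParam, 'box_type', 'single_side_visible')
* GenParam is then passed to find_box_3d.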
Troubleshooting
Visualizing extracted edges and sampled scene: To debug the box detector, some of the internally used data can
be visualized by obtaining it from the returned dictionary BoxInformation, using get_dict_tuple.
The sampled 3D scene can be extracted with the key sampled_scene. Finding smaller boxes requires a
denser sampling and subsequently slows down the box detection.
The sampled 3D edges can be extracted with the key sampled_edges and
sampled_edges_directions. Both 3D object models contain the same points, however,
sampled_edges contains the viewing direction of the edge points as normal vectors, while
sampled_edges_directions contains the edge directions of the edge points as normal vectors.
Note that the edge directions should be perpendicular to the edges, pointing outwards of the boxes.
Improve performance: If find_box_3d is taking too long, the following steps might help to increase its per-
formance.
• Remove more background and clutter: A significant improvement in runtime and detection accuracy
can usually be achieved by removing as much of the background and clutter from the 3D scene as
possible.
The most common approaches for removing unwanted data are:
– Thresholding the X-, Y- and Z-coordinates, either by using threshold and reduce_domain
on the XYZ-images before calling xyz_to_object_model_3d, or by using
select_points_object_model_3d directly on the 3D object model that contains the
scene.
– Some sensors return an intensity image along with the 3D data. Filters on the intensity image can
be used to remove parts of the image that contain background.
– Use background subtraction. If the scene is static, for example, if the sensor is mounted in a fixed
position over a conveyor belt, the XYZ-images of the background can be acquired once without any
boxes in them. Afterwards, sub_image and threshold can be used on the Z-images to select parts
of the 3D data that are not part of the background.
• Increase minimum score: An increased minimum score MinScore might lead to more boxes being
removed earlier in the detection pipeline.
• Increase the smallest possible box: The smaller the smallest possible box side is, the slower
find_box_3d runs. For example, if all boxes are usually seen from a single side, it might make
sense to set SideLen3 to -1. Additionally, box_type can be set to limit the type of boxes that are
searched.
• Manually computing and filtering edges: The edges of the scene can be extracted manually, using
edges_object_model_3d, and passed to find_box_3d using the generic parameter 3d_edges
(see above). Thus, the manual extraction can be used as a further way of filtering the edges.
Parameters
Alternatives
find_surface_model
Module
3D Metrology
A possible example for a 3D Gripping Point Detection application: A 3D scene (e.g., an RGB image and
XYZ-images) is analyzed and possible gripping points are suggested.
HALCON provides a pretrained model which is ready for inference without an additional training step. To finetune
the model for a specific task, it is possible to retrain it on a custom application domain. 3D Gripping Point Detection
also works on objects that were not seen in training. Thus, there is no need to provide a 3D model of the objects
that are to be targeted. 3D Gripping Point Detection can also cope with scenes containing various different objects
at once, scenes with partly occluded objects, and with scenes containing cluttered 3D data.
The general inference workflow as well as the retraining are described in the following sections.
General Inference Workflow
This paragraph describes how to determine a suitable gripping point on arbitrary object surfaces using
a 3D Gripping Point Detection model. An application scenario can be seen in the HDevelop example
3d_gripping_point_detection_workflow.hdev.
1. Read in the 3D Gripping Point Detection model using
• read_dl_model.
2. Set the model parameters regarding, e.g., the used devices or image dimensions using
• set_dl_model_param.
3. Generate a data dictionary DLSample for each 3D scene. This can be done using the procedure
• gen_dl_samples_3d_gripping_point_detection,
which can cope with different kinds of 3D data. For further information on the data requirements see the
section “Data” below.
4. Preprocessing of the data before the inference. For this, you can use the procedure
• preprocess_dl_samples.
The required preprocessing parameters can be generated from the model with
• create_dl_preprocess_param_from_model or
• create_dl_preprocess_param.
Note that the preprocessing of the data has significant impact on the inference. See the section “3D scenes”
below for further details.
5. Apply the model using the operator
• apply_dl_model.
6. Perform a post-processing step on the resulting DLResult to retrieve gripping points for your scene using
the procedure
• gen_dl_3d_gripping_points_and_poses.
7. Visualize the results using
• dev_display_dl_data or
• dev_display_dl_3d_data, respectively.
A condensed code sketch of this inference workflow is given below.
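A condensed sketch of this workflow; the model file name is an assumption, and DLSample is assumed to have been created and preprocessed with the procedures listed above:
* Read the pretrained 3D Gripping Point Detection model (file name is an assumption).
read_dl_model ('pretrained_dl_3d_gripping_point_detection.hdl', DLModelHandle)
* Select the device to run the inference on (DLDevice queried beforehand).
set_dl_model_param (DLModelHandle, 'device', DLDevice)
* Apply the model to the preprocessed sample.
apply_dl_model (DLModelHandle, DLSample, [], DLResult)
* Gripping points are then derived from DLResult with the procedure
* gen_dl_3d_gripping_points_and_poses.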
Preprocess the data This part is about how to preprocess your data.
1. The information content of your dataset needs to be converted. This is done by the procedure
• read_dl_dataset_3d_gripping_point_detection.
It creates a dictionary DLDataset which serves as a database and stores all necessary information
about your data. For more information about the data and the way it is transferred, see the section
“Data” below and the chapter Deep Learning / Model.
2. Split the dataset represented by the dictionary DLDataset. This can be done using the procedure
• split_dl_dataset.
3. The network imposes several requirements on the images. These requirements (for example the image
size and gray value range) can be retrieved with
• get_dl_model_param.
For this you need to read the model first by using
• read_dl_model.
4. Now you can preprocess your dataset. For this, you can use the procedure
• preprocess_dl_dataset.
To use this procedure, specify the preprocessing parameters, such as the image size. Store all the parameters
with their values in a dictionary DLPreprocessParam, for which you can use the procedure
• create_dl_preprocess_param_from_model.
We recommend to save this dictionary DLPreprocessParam in order to have access to the prepro-
cessing parameter values later during the inference phase.
Training of the model This part explains the finetuning of the 3D Gripping Point Detection model by retraining
it.
1. Set the training parameters and store them in the dictionary TrainParam. This can be done using the
procedure
• create_dl_train_param.
2. Train the model. This can be done using the procedure
• train_dl_model.
The procedure expects:
• the model handle DLModelHandle,
• the dictionary DLDataset containing the data information,
• the dictionary TrainParam containing the training parameters.
Evaluation of the retrained model In this part, we evaluate the 3D Gripping Point Detection model.
Data
This section gives information on the data that needs to be provided for the model inference or training and
evaluation of a 3D Gripping Point Detection model.
As a basic concept, the model handles data by dictionaries, meaning it receives the input data from a dictionary
DLSample and returns a dictionary DLResult. More information on the data handling can be found in the
chapter Deep Learning / Model.
3D scenes 3D Gripping Point Detection processes 3D scenes, which consist of regular 2D images and depth
information.
In order to adapt these 3D data to the network input requirements, a preprocessing step is necessary for the
inference. See the section “Specific Preprocessing Parameters” below for information on certain preprocess-
ing parameters. It is recommended to use a high resolution 3D sensor, in order to ensure the necessary data
quality. The following data are needed:
2D image
• RGB image, or
• intensity (gray value) image
Intensity image.
Depth information
• Z-image (values need to increase from points close to the sensor to far points; this is for example
the case if the data is given in the camera coordinate system)
Normals (optional)
Normals image.
Providing normal images improves the runtime, as this avoids the need for their computation.
In order to restrict the search area, the domain of the RGB/intensity image can be reduced. For details, see
the section “Specific Preprocessing Parameters” below. Note that the domain of the XYZ-images and the
(optional) normals images need to be identical. Furthermore, for all input data, only valid pixels may be part
of the used domain.
Data for Training and Evaluation The training data is used to train and evaluate a network specifically for your
application.
The dataset needed for this consists of 3D scenes and corresponding information on possible gripping sur-
faces given as segmentation images. They have to be provided in a way the model can process them. Con-
cerning the 3D scene requirements, find more information in the section “3D scenes” above.
How the data has to be formatted in HALCON for a DL model is explained in the chapter Deep Learning /
Model. In short, a dictionary DLDataset serves as a database for the information needed by the training
and evaluation procedures.
The data for DLDataset can be read using read_dl_dataset_3d_gripping_point_detection.
See the reference of read_dl_dataset_3d_gripping_point_detection for information on the
required contents of a 3D Gripping Point Detection DLDataset.
Along with 3D scenes, segmentation images need to be provided, which function as the ground truth. The
segmentation images contain two gray values that denote every pixel in the scene to be either a valid gripping
point or not. You can label your data using the MVTec Deep Learning Tool, available from the MVTec
website.
(1) Labeling of an intensity image. (2) Segmentation image, denoting gripping points (gray).
Make sure that the whole labeled area provides robust gripping points for the robot. Consider the following
aspects when labeling your data:
• Gripping points need to be on a surface that can be accessed by the robot arm without being obstructed.
• Gripping points need to be on a surface that the robot arm can grip with its suction cup. Therefore,
consider the object’s material, shape, and surface tilt with regard to the ground plane.
• Take the size of the robot's suction cup into account.
• Take the strength of the suction cup into account.
• Tend to label gripping points near the object’s center of mass (especially for potentially heavier items).
• Gripping points should not be at an object’s border.
• Gripping points should not be at the border of visible object regions.
Model output As inference output, the model will return a dictionary DLResult for every sample. This dictio-
nary includes the following entries:
• ’gripping_map’: Binary image, indicating for each pixel of the scene whether the model predicted
a gripping point (pixel value = 1.0) or not (0.0).
• ’gripping_confidence’: Image, containing raw, uncalibrated confidence values for every point
in the scene.
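For instance, the two result images could be read from the result dictionary as follows (iconic dictionary entries are accessed with get_dict_object; variable names are illustrative):
* Binary gripping map: 1.0 where the model predicts a gripping point.
get_dict_object (GrippingMap, DLResult, 'gripping_map')
* Raw, uncalibrated confidence values.
get_dict_object (GrippingConfidence, DLResult, 'gripping_confidence')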
mean_pro Mean overlap of all ground truth regions labeled as gripping class with the predictions (Per-Region
Overlap). See the paper referenced below for a detailed description of this evaluation measure.
mean_precision Mean pixel-level precision of the predictions for the gripping class. The precision is the
proportion of true positives to all positives (true (TP) and false (FP) ones).
precision = TP / (TP + FP)
mean_iou Intersection over union (IoU) between the ground truth pixels and the predicted pixels of the gripping
class. See Deep Learning / Semantic Segmentation and Edge Extraction for a detailed description of this
evaluation measure.
gripping_point_precision Proportion of true positives to all positives (true and false ones).
For this measure, a true positive is a correctly predicted gripping point, meaning the predicted point is
located within a ground truth region. However, only one gripping point per region is considered a true
positive, additional predictions in the same region are considered false positives.
gripping_point_recall The recall is the proportion of the number of correctly predicted gripping points
to the number of all ground truth regions of the gripping class.
recall = TP / (TP + FN)
gripping_point_f_score To represent precision and recall with a single number, we provide the F-score,
the harmonic mean of precision and recall.
F-score = 2 * (precision * recall) / (precision + recall)
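As an illustration with made-up numbers: if 8 gripping points are predicted correctly (TP = 8), 2 predictions do not hit any ground truth region (FP = 2), and 2 ground truth regions receive no prediction (FN = 2), then precision = 8 / (8 + 2) = 0.8, recall = 8 / (8 + 2) = 0.8, and F-score = 2 * (0.8 * 0.8) / (0.8 + 0.8) = 0.8.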
Postprocessing
The model results DLResult can be postprocessed with gen_dl_3d_gripping_points_and_poses in
order to generate gripping points. Furthermore, this procedure can be parameterized in order to reject small grip-
ping regions using min_area_size, or serve as a template to define custom selection criteria.
The procedure adds the following entry to the dictionary DLResult:
– ’region’: Connected region of potential gripping points. The determined gripping point lies inside
this region.
– ’row’: Row coordinate of the gripping point in the preprocessed RGB/intensity image.
– ’column’: Column coordinate of the gripping point in the preprocessed RGB/intensity image.
– ’pose’: 3D pose of the gripping point (relative to the coordinate system of the XYZ-images, i.e., of
the camera) which can be used by the robot.
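Assuming the entries listed above are stored directly in DLResult (the exact nesting may differ), they could be accessed as follows:
* Assumption: the gripping point entries are stored directly in DLResult.
get_dict_tuple (DLResult, 'row', GripRow)
get_dict_tuple (DLResult, 'column', GripColumn)
get_dict_tuple (DLResult, 'pose', GrippingPose)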
Specific Preprocessing Parameters
• ’min_z’, ’max_z’: Determine the allowed distance from the camera for 3D points based on the Z-image.
These parameters can help to reduce erroneous outliers and thus increase the application robustness.
A restriction of the search area can be done by reducing the domain of the input images (using reduce_domain).
The way preprocess_dl_samples handles the domain is set using the preprocessing parameter
’domain_handling’. The parameter ’domain_handling’ should be used in a way that only essential
information is passed on to the network for inference. The following images show how an input image with
reduced domain is passed on after the preprocessing step depending on the set ’domain_handling’.
References
Bergmann, P., Batzner, K., Fauser, M., Sattlegger, D. and Steger, C., 2021. The MVTec anomaly detection dataset:
a comprehensive real-world dataset for unsupervised anomaly detection. International Journal of Computer Vision,
129(4), pp.1038-1059.
A possible example for a Deep 3D Matching application: Images from different angles are used to detect an
object. As a result the 3D pose of the object is computed.
The Deep 3D Matching model consists of two components, which are dedicated to two distinct tasks, the detection,
which localizes objects, and the estimation of object poses. For a Deep 3D Matching application, both components
need to be trained on the 3D CAD model of the object to be found in the application scenes.
Note: For now, only inference is possible in HALCON; the custom training of a model will be available in a future
version of HALCON. If you want to use the feature for your applications, please contact your HALCON sales
partner for further information.
Once trained, the deep learning model can be used to infer the pose of the object in new application scenes. During
the inference process, images from different angles are used as input.
General Inference Workflow
This paragraph describes how to determine a 3D pose using the Deep 3D Matching method. An application
scenario can be seen in the HDevelop example deep_3d_matching_workflow.hdev.
1. Read in the Deep 3D Matching model using
• read_deep_matching_3d.
2. Optimize the deep learning networks for the use with AI²-interfaces:
(a) Extract the detection network from the Deep 3D Matching model using
• get_deep_matching_3d_param.
(b) Optimize the extracted network for inference with
• optimize_dl_model_for_inference.
(c) Set the optimized detection network using
• set_deep_matching_3d_param.
(d) Repeat these steps for the 3D pose estimation network.
(e) Save the optimized model using
• write_deep_matching_3d.
Note that the optimization of the model has a significant impact on the runtime if it is done with every
inference run. Writing the optimized model therefore saves time during inference.
• set_deep_matching_3d_param.
• apply_deep_matching_3d.
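A minimal HDevelop sketch of this workflow is shown below. It is only an illustration: the file names, the device
query, the precision, and all parameter values are placeholders and have to be adapted to the application.
* Read the pretrained Deep 3D Matching model (file name is a placeholder).
read_deep_matching_3d ('my_object.dm3', DeepMatchingModel)
* Optimize the detection network for inference (device and precision are examples).
query_available_dl_devices (['runtime'], ['gpu'], DLDevices)
get_deep_matching_3d_param (DeepMatchingModel, 'dl_model_detection', DLModelDetection)
create_dict (OptimizeParams)
optimize_dl_model_for_inference (DLModelDetection, DLDevices[0], 'float32', [], OptimizeParams, DLModelDetectionOpt, ConversionReport)
set_deep_matching_3d_param (DeepMatchingModel, 'dl_model_detection', DLModelDetectionOpt)
* Repeat the optimization for 'dl_model_pose_estimation', then save the optimized model.
write_deep_matching_3d (DeepMatchingModel, 'my_object_optimized.dm3')
* Set inference parameters and apply the model on calibrated images of the scene.
set_deep_matching_3d_param (DeepMatchingModel, 'min_score', 0.3)
apply_deep_matching_3d (Images, DeepMatchingModel, DeepMatchingResults)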
Multi-View Camera Setup In order to use Deep 3D Matching with high accuracy, you need a calibrated stereo
or multi-view camera setup. In comparison to stereo reconstruction, Deep 3D Matching can deal with more
strongly varying camera constellations and distances. Also, there is no need to use 3D sensors in the setup.
For information on how to calibrate the setup, please refer to the chapter Calibration / Multi-View.
The objects to be detected must be captured from two or more different perspectives in order to calculate the
3D poses.
Example setups for Deep 3D Matching: Scenes are recorded by several cameras; the objects to be detected
do not have to be seen by every single camera, but by at least two cameras.
Data for Training and Evaluation The training data is used to train and evaluate a Deep 3D Matching model
specifically for your application.
The required training data is generated using CAD models. Synthetic images of the object are created
from various angles, lighting conditions, and backgrounds. Note that no real images are required; the
data is generated based on the CAD model.
The data needed for this is a CAD model and corresponding information on material, surface finish, and color.
Information about possible axial and radial symmetries can significantly improve the generated training data.
apply_deep_matching_3d (
Images : : Deep3DMatchingModel : DeepMatchingResults )
1. Object Detection The object detection deep learning model is used to find instances of the target object in all
images.
2. 3D pose estimation The pose estimation deep learning model is used to estimate the 3D pose of all instances
found in the previous step. Poses of the same object found in different images are combined into a single
instance.
3. Pose Refinement The poses found in the previous step are further refined using edges visible in the image.
Additionally, their score is computed.
4. Filter Results The detected instances are filtered using the minimum score (’min_score’), the minimum num-
ber of cameras in which instances must be visible (’min_num_views’), as well as the maximum number of
instances to return (’num_matches’).
Result Format
The results are returned in DeepMatchingResults as a dictionary. The dictionary key ’results’ contains all
detected results. Each result has the following keys:
’score’:
The score of the result instance.
’pose’:
The pose of the result instance in the world coordinate system.
’cameras’:
A tuple of integers containing the camera indices in which the instance was detected.
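For example, the detected instances could be read out as follows (a hedged HDevelop sketch, assuming
DeepMatchingResults is the output of apply_deep_matching_3d):
* Iterate over all detected instances and read score, pose, and camera indices.
get_dict_tuple (DeepMatchingResults, 'results', ResultInstances)
for I := 0 to |ResultInstances| - 1 by 1
    get_dict_tuple (ResultInstances[I], 'score', Score)
    get_dict_tuple (ResultInstances[I], 'pose', Pose)
    get_dict_tuple (ResultInstances[I], 'cameras', CameraIndices)
endfor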
Parameters
. Images (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; object : byte / real
Input images.
. Deep3DMatchingModel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . deep_matching_3d ; handle
Deep 3D matching model.
. DeepMatchingResults (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dict-array ; handle
Results.
Execution Information
get_deep_matching_3d_param ( : : Deep3DMatchingModel,
GenParamName : GenParamValue )
’dl_model_detection’, ’dl_model_pose_estimation’:
The deep learning models used for Deep 3D Matching. Both models are already pre-trained
for the target object. They can be obtained and written back in order to, for example, optimize them using
optimize_dl_model_for_inference or change the device on which they are executed.
’min_num_views’:
This parameter determines the minimum number of cameras in which an instance must be visible in order to
be returned by apply_deep_matching_3d. The parameter can be either an integer larger than zero, or
the string ’auto’. If ’auto’, instances must be visible in a single camera if only a single camera is used, and
in at least two cameras otherwise.
Suggested values: ’auto’, 2, 3
Default: ’auto’
Value range: ≥ 0 .
’min_score’:
This parameter determines the minimum score of detected instances. In other words,
apply_deep_matching_3d ignores all detected instances with a score smaller than this value.
The score computed by the Deep 3D Matching model lies between 0 and 1, where 0 indicates a bad match
and 1 is a very good match.
Value range: [0, . . . , 1]
Default: 0.2
’num_matches’:
This parameter determines the maximum number of matches returned by apply_deep_matching_3d.
If the operator finds more instances than set in ’num_matches’, only the ’num_matches’ instances with the
highest scores are returned. This parameter can be set to zero, in which case all instances above ’min_score’
are returned.
Value range: ≥ 0 .
Default: 0
’orig_3d_model’:
This parameter returns the original 3D CAD model used for creating the Deep 3D Matching model. It can be
used to, for example, visualize detection results.
Attention
Deep 3D Matching requires images without strong distortion. It is therefore recommended
to remove any distortion from the camera parameters and images beforehand, using, for example,
change_radial_distortion_cam_par in combination with change_radial_distortion_image
or gen_radial_distortion_map and map_image.
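A hedged HDevelop sketch of such a rectification, assuming calibrated camera parameters CamParam of a lens
described by the division model:
* Compute camera parameters without radial distortion (distortion coefficient set to 0).
change_radial_distortion_cam_par ('fixed', CamParam, 0, CamParamRectified)
* Rectify the image accordingly; here the full image domain is used.
get_domain (Image, Domain)
change_radial_distortion_image (Image, Domain, ImageRectified, CamParam, CamParamRectified)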
Parameters
. Deep3DMatchingModel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . deep_matching_3d ; handle
Deep 3D Matching model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Name of parameter.
Default: ’min_score’
Suggested values: GenParamName ∈ {’min_score’, ’num_matches’, ’orig_3d_model’, ’min_num_views’,
’dl_model_detection’, ’dl_model_pose_estimation’, ’camera_parameter’, ’camera_pose’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . attribute.value(-array) ; string / real / integer / handle
Obtained value of parameter.
Execution Information
set_deep_matching_3d_param ( : : Deep3DMatchingModel,
GenParamName, GenParamValue : )
Execution Information
add_deformable_surface_model_reference_point (
: : DeformableSurfaceModel, ReferencePointX, ReferencePointY,
ReferencePointZ : ReferencePointIndex )
Reference points are defined in model coordinates, i.e., in the coordinate frame of the model parameter of
create_deformable_surface_model. The operators find_deformable_surface_model and
refine_deformable_surface_model return the position of all added reference points as found in the
scene.
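For example (a hedged sketch; the coordinates are placeholders given in model coordinates):
* Add a single reference point to the deformable surface model.
add_deformable_surface_model_reference_point (DeformableSurfaceModel, 0.01, 0.02, 0.0, ReferencePointIndex)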
Parameters
. DeformableSurfaceModel (input_control) . . . . . . . . . . . . . . . . . . . . deformable_surface_model ; handle
Handle of the deformable surface model.
. ReferencePointX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer
x-coordinates of a reference point.
. ReferencePointY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer
y-coordinates of a reference point.
. ReferencePointZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer
z-coordinates of a reference point.
. ReferencePointIndex (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Index of the new reference point.
Result
add_deformable_surface_model_reference_point returns 2 (H_MSG_TRUE) if all parameters are
correct. If necessary, an exception is raised.
Execution Information
add_deformable_surface_model_sample ( : : DeformableSurfaceModel,
ObjectModel3D : )
Parameters
. DeformableSurfaceModel (input_control) . . . . . . . . . . . . . . . . . . . . deformable_surface_model ; handle
Handle of the deformable surface model.
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the deformed 3D object model.
Result
add_deformable_surface_model_sample returns 2 (H_MSG_TRUE) if all parameters are correct. If
necessary, an exception is raised.
Execution Information
clear_deformable_surface_matching_result (
: : DeformableSurfaceMatchingResult : )
clear_deformable_surface_model ( : : DeformableSurfaceModel : )
create_deformable_surface_model ( : : ObjectModel3D,
RelSamplingDistance, GenParamName,
GenParamValue : DeformableSurfaceModel )
Note that the direction and orientation (inward or outward) of the normals of the model are important for matching.
The deformable surface model is created by sampling the 3D object model with a certain distance. The sampling
distance must be specified in the parameter RelSamplingDistance and is parametrized relative to the di-
ameter of the axis-parallel bounding box of the 3D object model. For example, if RelSamplingDistance
is set to 0.05 and the diameter of ObjectModel3D is 10 cm, the points sampled from the object’s
surface will be approximately 5 mm apart. The sampled points can be obtained with the operator
get_deformable_surface_model_param using the value ’sampled_model’. Note that outlier points in
the object model should be avoided, as they would corrupt the diameter. Reducing RelSamplingDistance
leads to more points, and in turn to a more stable but slower matching. Increasing RelSamplingDistance
leads to fewer points, and in turn to a less stable but faster matching.
’model_invert_normals’: Invert the orientation of the surface normals of the model. The normal orientation needs
to be known for the model generation. If both the model and the scene are acquired with the same setup, the
normals will already point in the same direction. If the model was loaded from a CAD file, the normals might
point into the opposite direction. If you experience the effect that the model is found on the ’outside’ of the
scene surface and the model was created from a CAD file, try to set this parameter to ’true’. Also, make sure
that the normals in the CAD file all point either outward or inward, i.e., are oriented consistently.
List of values: ’false’, ’true’
Default: ’false’
’scale_min’ and ’scale_max’: The minimum and maximum allowed scaling of the model. Note that if you set one
of the two parameters, the other one must be set too.
Suggested values: 0.8, 1, 1.2
Default: No scaling
Restriction: 0 < ’scale_min’ < ’scale_max’
’bending_max’: Controls the maximum automatic deformation of the model. The model is deformed automati-
cally by bending it with an angle up to the value of ’bending_max’. This allows for deformations to be found
that are within this bending range. The angle is passed in degrees.
Suggested values: 5, 10, 30
Default: 20
Restriction: 0 <= ’bending_max’ < 90
’stiffness’: Controls the stiffness of the model when performing the refinement. Larger values of this parameter
lead to a stiffer model that allows less deformation. Smaller values lead to a less stiff model that allows
more deformation.
Suggested values: 0.2, 0.5, 0.8
Default: 0.5
Restriction: 0 < ’stiffness’ <= 1
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model.
. RelSamplingDistance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Sampling distance relative to the object’s diameter
Default: 0.05
Suggested values: RelSamplingDistance ∈ {0.1, 0.05, 0.03, 0.02, 0.01}
Restriction: 0 < RelSamplingDistance < 1
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Names of the generic parameters.
Default: []
Suggested values: GenParamName ∈ {’model_invert_normals’, ’scale_min’, ’scale_max’, ’bending_max’,
’stiffness’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / real / integer
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’, 1, 0.9, 1.1, 5, 10, 20, 30, 0.05, 0.1, 0.2}
. DeformableSurfaceModel (output_control) . . . . . . . . . . . . . . . . . . . deformable_surface_model ; handle
Handle of the deformable surface model.
Result
create_deformable_surface_model returns 2 (H_MSG_TRUE) if all parameters are correct. If neces-
sary, an exception is raised.
Execution Information
read_deformable_surface_model, add_deformable_surface_model_sample,
add_deformable_surface_model_reference_point, write_deformable_surface_model,
clear_deformable_surface_model
References
Bertram Drost, Slobodan Ilic: “Graph-Based Deformable 3D Object Matching.” Proceedings of the 37th German
Conference on Pattern Recognition, pp. 222-233, 2015.
Module
3D Metrology
deserialize_deformable_surface_model (
: : SerializedItemHandle : DeformableSurfaceModel )
find_deformable_surface_model ( : : DeformableSurfaceModel,
ObjectModel3D, RelSamplingDistance, MinScore, GenParamName,
GenParamValue : Score, DeformableSurfaceMatchingResult )
The operator find_deformable_surface_model finds the best match of the deformable surface model
DeformableSurfaceModel in the 3D scene ObjectModel3D. The deformable surface model must have
been created previously with, for example, create_deformable_surface_model.
The matching requires that the 3D object model ObjectModel3D contains points and normals. The scene shall
provide one of the following options:
It is important for an accurate pose that the normals of the scene and the model point in the same direction (see
’scene_invert_normals’). Note that triangles or polygons in the passed scene are ignored. Instead, only the vertices
are used for matching. It is thus in general not recommended to use this operator on meshed scenes, such as
CAD data. Instead, such a scene must be sampled beforehand using sample_object_model_3d to create
points and normals. When using noisy point clouds, e.g., from time-of-flight cameras, the generic parameter
’scene_normal_computation’ should be set to ’mls’ in order to obtain more robust results (see below).
First, points are sampled uniformly from the scene passed in ObjectModel3D. The sampling distance is con-
trolled with the parameter RelSamplingDistance, and is given relative to the diameter of the surface model.
Decreasing RelSamplingDistance leads to more sampled points, and in turn to a more stable but slower
matching. Increasing RelSamplingDistance reduces the number of sampled scene points, which leads to a
less stable but faster matching. For an illustration showing different values for RelSamplingDistance, please
refer to the operator create_deformable_surface_model.
The operator get_deformable_surface_matching_result can be used to retrieve the sampled scene
points for visual inspection. For a robust matching it is recommended that at least 50-100 scene points are sampled
for each object instance.
The method first finds an approximate position of the object. This position is then refined. The generic parameters
controlling the deformation are described further down.
If a match was found, the score of the match is returned in Score and a deformable surface match-
ing result handle is returned in DeformableSurfaceMatchingResult. Details of the matching re-
sult, such as the deformed model and the position of the reference points, can be queried with the operator
get_deformable_surface_matching_result using the result handle.
The score is normalized between 0 and 1 and represents the amount of model surface visible in the scene. A value
of 1 represents a perfect match. The parameter MinScore can be used to filter the result. A match is returned
only if its score exceeds the value of MinScore.
The parameters GenParamName and GenParamValue are used to set generic parameters. Both get a tuple
of equal length, where the tuple passed to GenParamName contains the names of the parameters to set, and the
tuple passed to GenParamValue contains the corresponding values. The possible parameter names and values
are described below.
’scene_normal_computation’: This parameter controls the normal computation of the sampled scene. In the de-
fault mode ’fast’, normals are computed based on a small neighborhood of points. In the mode ’mls’, nor-
mals are computed based on a larger neighborhood and using the more complex but more accurate ’mls’
method. A more detailed description of the ’mls’ method can be found in the description of the operator
surface_normals_object_model_3d. The ’mls’ mode is intended for noisy data, such as images
from time-of-flight cameras.
List of values: ’fast’, ’mls’
Default: ’fast’
’scene_invert_normals’: Invert the orientation of the surface normals of the scene. The orientation of surface
normals of the scene have to match with the orientation of the model. If both the model and the scene are
acquired with the same setup, the normals will already point in the same direction. If you experience the
effect that the model is found on the ’outside’ of the scene surface, try to set this parameter to ’true’. Also,
make sure that the normals in the scene all point either outward or inward, i.e., are oriented consistently.
List of values: ’false’, ’true’
Default: ’false’
’pose_ref_num_steps’: Number of iterations for the refinement. Increasing the number of iterations leads to a more
accurate position at the expense of runtime. However, once convergence is reached, the accuracy can no
longer be increased, even if the number of steps is increased.
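A hedged HDevelop sketch of a matching call (the scene handle and all values are placeholders):
* Find the best match of the model in the scene point cloud; use 'mls' normals for noisy data.
find_deformable_surface_model (DeformableSurfaceModel, Scene3D, 0.05, 0.2, ['scene_normal_computation'], ['mls'], Score, DeformableSurfaceMatchingResult)
* Query, for example, the deformed model for visualization.
get_deformable_surface_matching_result (DeformableSurfaceMatchingResult, 'deformed_model', 0, DeformedModel3D)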
Parameters
. DeformableSurfaceMatchingResult (output_control) . . . . . .
deformable_surface_matching_result(-array) ; handle
Handle of the matching result.
Result
find_deformable_surface_model returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary,
an exception is raised.
Execution Information
get_deformable_surface_matching_result (
: : DeformableSurfaceMatchingResult, ResultName,
ResultIndex : ResultValue )
’sampled_scene’: A 3D object model handle is returned that contains the sampled scene points that were
used in the matching or refinement. This is helpful for tuning the sampling distance of the
scene (see parameter RelSamplingDistance of operators find_deformable_surface_model and
refine_deformable_surface_model). The parameter ResultIndex is ignored.
’rigid_pose’: If DeformableSurfaceMatchingResult was created by
find_deformable_surface_model, a rigid pose is returned that approximates
the deformable matching result. The parameter ResultIndex is ignored. This pa-
rameter is not available if DeformableSurfaceMatchingResult was created by
refine_deformable_surface_model.
’reference_point_x’:
’reference_point_y’:
’reference_point_z’: Returns the x-, y- or z-coordinates of a transformed reference point. The
reference point must have been added to the deformable surface model using the operator
add_deformable_surface_model_reference_point. The indices of the reference points to be
returned are passed in ResultIndex. If ’all’ is passed in ResultIndex, the position of all reference
points is returned.
’deformed_model’: Returns a deformed variant of the 3D object model that was originally passed to
create_deformable_surface_model. The 3D object model is deformed with the reconstructed
deformation. Triangles, polygons and extended attributes contained in the original 3D object model are
maintained. The parameter ResultIndex is ignored.
’deformed_sampled_model’: Returns a deformed variant of the 3D object model that was sampled by
create_deformable_surface_model. The returned 3D object model has the same number of points
as the original, undeformed sampled model, and the points are in the same order. Details about the sampling
are described in create_deformable_surface_model. The original, undeformed sampled model
can be obtained with get_deformable_surface_model_param. The parameter ResultIndex is
ignored.
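For example, the found positions of all reference points can be queried as follows (a hedged sketch):
* Query the x-, y-, and z-coordinates of all transformed reference points.
get_deformable_surface_matching_result (DeformableSurfaceMatchingResult, 'reference_point_x', 'all', RefPointsX)
get_deformable_surface_matching_result (DeformableSurfaceMatchingResult, 'reference_point_y', 'all', RefPointsY)
get_deformable_surface_matching_result (DeformableSurfaceMatchingResult, 'reference_point_z', 'all', RefPointsZ)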
Parameters
. DeformableSurfaceMatchingResult (input_control) . . . . . . deformable_surface_matching_result
; handle
Handle of the deformable surface matching result.
. ResultName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Name of the result property.
Default: ’sampled_scene’
List of values: ResultName ∈ {’sampled_scene’, ’rigid_pose’, ’reference_point_x’, ’reference_point_y’,
’reference_point_z’, ’deformed_model’, ’deformed_sampled_model’}
. ResultIndex (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Index of the result property.
Default: 0
Suggested values: ResultIndex ∈ {0, 1, 2, 3, ’all’}
Restriction: ResultIndex >= 0
. ResultValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string / real / handle
Value of the result property.
Result
If the handle of the result is valid, the operator get_deformable_surface_matching_result returns
the value 2 (H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
Possible Predecessors
find_deformable_surface_model, refine_deformable_surface_model
Possible Successors
clear_deformable_surface_model
See also
find_deformable_surface_model, refine_deformable_surface_model,
read_deformable_surface_model, write_deformable_surface_model,
clear_deformable_surface_model
Module
3D Metrology
get_deformable_surface_model_param ( : : DeformableSurfaceModel,
GenParamName : GenParamValue )
’diameter’: Diameter of the model point cloud. The diameter is the length of the diagonal of the axis-parallel
bounding box.
’sampled_model’: The 3D points sampled from the model for matching. This returns a 3D object model that
contains all points sampled from the model surface for matching.
’training_models’: This returns all 3D object models that were used for the training of the de-
formable surface model. This includes the 3D object model passed to and sampled
by create_deformable_surface_model, and the 3D object models added with
add_deformable_surface_model_sample.
’reference_points_x’:
’reference_points_y’:
’reference_points_z’: Returns the x-, y- or z-coordinates of all reference points added with the operator
add_deformable_surface_model_reference_point.
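For example (a hedged sketch):
* Query the model diameter and the sampled model points used for matching.
get_deformable_surface_model_param (DeformableSurfaceModel, 'diameter', Diameter)
get_deformable_surface_model_param (DeformableSurfaceModel, 'sampled_model', SampledModel3D)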
Parameters
read_deformable_surface_model (
: : FileName : DeformableSurfaceModel )
refine_deformable_surface_model ( : : DeformableSurfaceModel,
ObjectModel3D, RelSamplingDistance, InitialDeformationObjectModel3D,
GenParamName, GenParamValue : Score,
DeformableSurfaceMatchingResult )
Alternatives
find_deformable_surface_model
See also
create_deformable_surface_model, find_deformable_surface_model
Module
3D Metrology
serialize_deformable_surface_model (
: : DeformableSurfaceModel : SerializedItemHandle )
write_deformable_surface_model ( : : DeformableSurfaceModel,
FileName : )
Parameters
. DeformableSurfaceModel (input_control) . . . . . . . . . . . . . . . . . . . . deformable_surface_model ; handle
Handle of the deformable surface model to write.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name to write to.
File extension: .dsfm
Result
write_deformable_surface_model returns 2 (H_MSG_TRUE) if all parameters are correct and the HAL-
CON process has write permission to the file. If necessary, an exception is raised.
Execution Information
3.5 Shape-Based
clear_shape_model_3d ( : : ShapeModel3DID : )
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_shape_model_3d, read_shape_model_3d, write_shape_model_3d
Module
3D Metrology
’x’: The reference plane is the yz plane of the world coordinate system. The projected x axis of the world coordi-
nate system points upwards in the image plane.
’-x’: The reference plane is the yz plane of the world coordinate system. The projected x axis of the world
coordinate system points downwards in the image plane.
’y’: The reference plane is the xz plane of the world coordinate system. The projected y axis of the world coordi-
nate system points upwards in the image plane.
’-y’: The reference plane is the xz plane of the world coordinate system. The projected y axis of the world
coordinate system points downwards in the image plane.
’z’: The reference plane is the xy plane of the world coordinate system. The projected z axis of the world coordi-
nate system points upwards in the image plane.
’-z’: The reference plane is the xy plane of the world coordinate system. The projected z axis of the world
coordinate system points downwards in the image plane.
Alternatively to the above values, an arbitrary normal vector can be specified in RefPlaneNormal, which is not
restricted to the coordinate axes. For this, a tuple of three values representing the three components of the normal
vector must be passed.
Note that the position of the optical center and the point at which the camera looks must differ from each other.
Furthermore, the normal vector of the reference plane and the z axis of the camera must not be parallel. Otherwise,
the camera pose is not well-defined.
create_cam_pose_look_at_point is particularly useful if a 3D object model or a 3D shape
model should be visualized from a certain camera position. In this case, the pose that is cre-
ated by create_cam_pose_look_at_point can be passed to project_object_model_3d or
project_shape_model_3d, respectively.
It is also possible to pass tuples of different length for different input parameters. In this case, internally the
maximum number of parameter values over all input control parameters is computed. This number is taken as
the number of output camera poses. Then, all input parameters can contain a single value or the same number of
values as output camera poses. In the first case, the single value is used for the computation of all camera poses,
while in the second case the respective value of the element in the parameter is used for the computation of the
corresponding camera pose.
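A hedged HDevelop sketch (the camera parameters CamParam and the 3D object model are assumed to be available;
all coordinate values are placeholders):
* Camera at (0.5, -0.5, 1.0), looking at the origin; the projected -y axis points upwards.
create_cam_pose_look_at_point (0.5, -0.5, 1.0, 0, 0, 0, '-y', 0, CamPose)
* Visualize the 3D object model from this viewpoint.
project_object_model_3d (ModelContours, ObjectModel3D, CamParam, CamPose, [], [])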
Parameters
. CamPosX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
X coordinate of the optical center of the camera.
. CamPosY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Y coordinate of the optical center of the camera.
. CamPosZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Z coordinate of the optical center of the camera.
. LookAtX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
X coordinate of the 3D point to which the camera is directed.
. LookAtY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Y coordinate of the 3D point to which the camera is directed.
. LookAtZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Z coordinate of the 3D point to which the camera is directed.
. RefPlaneNormal (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string / real
Normal vector of the reference plane (points up).
Default: ’-y’
List of values: RefPlaneNormal ∈ {’x’, ’y’, ’z’, ’-x’, ’-y’, ’-z’}
. CamRoll (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; real
Camera roll angle.
Default: 0
. CamPose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
3D camera pose.
Result
If the parameters are valid, the operator create_cam_pose_look_at_point returns the value 2
(H_MSG_TRUE). If necessary an exception is raised. If the parameters are chosen such that the pose is not well
defined, the error 8940 is raised.
Execution Information
The 3D shape model is generated by computing different views of the 3D object model within a user-specified
pose range. The views are automatically generated by placing virtual cameras around the 3D object model and
projecting the 3D object model into the image plane of each virtual camera position. For each such obtained view a
2D shape representation is computed. Thus, for the generation of the 3D shape model, no images of the object are
used but only the 3D object model, which is passed in ObjectModel3D. The shape representations of all views
are stored in the 3D shape model, which is returned in ShapeModel3DID. During the matching process with
find_shape_model_3d, the shape representations are used to find out the best-matching view, from which
the pose is subsequently refined and returned.
In order to create the model views correctly, the camera parameters of the camera that will be used for the
matching must be passed in CamParam. The camera parameters are necessary, for example, to determine
the scale of the projections by using the actual focal length of the camera. Furthermore, they are used to
treat radial distortions of the lens correctly. Consequently, it is essential to calibrate the camera by using
calibrate_cameras before creating the 3D shape model. On the one hand, this is necessary to obtain ac-
curate poses from find_shape_model_3d. On the other hand, this makes the 3D matching applicable even
when using lenses with significant radial distortions.
The pose range within which the model views are generated can be specified by the parameters RefRotX,
RefRotY, RefRotZ, OrderOfRotation, LongitudeMin, LongitudeMax, LatitudeMin,
LatitudeMax, CamRollMin, CamRollMax, DistMin, and DistMax. Note that the model will
only be recognized during the matching if it appears within the specified pose range. The parameters are described
in the following:
Before computing the views, the origin of the coordinate system of the 3D object model is moved to the refer-
ence point of the 3D object model, which is the center of the smallest enclosing axis-parallel cuboid and can be
queried by using get_object_model_3d_params. The virtual cameras, which are used to create the views,
are arranged around the 3D object model in such a way that they all look at the origin of the coordinate system,
i.e., the z axes of the cameras pass through the origin. The pose range can then be specified by restricting the
views to a certain quadrilateral on the sphere around the origin. This naturally leads to the use of the spheri-
cal coordinates longitude, latitude, and radius. The definition of the spherical coordinate system is chosen such
that the equatorial plane corresponds to the xz plane of the Cartesian coordinate system with the y axis point-
ing to the south pole (negative latitude) and the negative z axis pointing in the direction of the zero meridian
(see convert_point_3d_spher_to_cart or convert_point_3d_cart_to_spher for further de-
tails about the conversion between Cartesian and spherical coordinates). The advantage of this definition is that a
camera with the pose [0,0,z,0,0,0,0] has its optical center at longitude=0, latitude=0, and radius=z. In this case, the
radius represents the distance of the optical center of the camera to the reference point of the 3D object model.
The longitude range, for which views are to be generated, can be specified by LongitudeMin and
LongitudeMax, both given in radians. Accordingly, the latitude range can be specified by LatitudeMin
and LatitudeMax, also given in radians. LongitudeMin and LongitudeMax are adjusted to maintain a
range of 360° (2π). If an adjustment is possible, LongitudeMin and the range are preserved. The minimum
and maximum distance between the camera center and the model reference point is specified by DistMin and
DistMax. Thereby, the model origin is in the center of the smallest enclosing cuboid and does not necessarily
coincide with the origin of the CAD coordinate system. Note that the unit of the distance must be meters (assuming
that the parameter Scale has been correctly set when reading the CAD file with read_object_model_3d).
Finally, the minimum and the maximum camera roll angle can be specified in CamRollMin and CamRollMax.
This interval specifies the allowable camera rotation around its z axis with respect to the 3D object model. If the
image plane is parallel to the plane on which the objects reside and if it is known that the object may rotate in this
plane only in a restricted range, then it is reasonable to specify this range in CamRollMin and CamRollMax.
In all other cases the interpretation of the camera roll angle is difficult, and hence, it is recommended to set this
interval to [−π, +π]. Note that the larger the specified pose range is chosen, the more memory the model will
consume (except for the range of the camera roll angle) and the slower the matching will be.
The orientation of the coordinate system of the 3D object model is defined by the coordinates within the CAD
file that was read by using read_object_model_3d. Therefore, it is reasonable to previously rotate the 3D
object model into a reference orientation such that the view that corresponds to longitude=0 and latitude=0 is ap-
proximately at the center of the pose range. This can be achieved by passing appropriate values for the reference
orientation in RefRotX, RefRotY, RefRotZ, and OrderOfRotation. The rotation is performed around the
axes of the 3D object model, whose origin was set to the reference point. The longitude and latitude range can then
be interpreted as a variation of the 3D object model pose around the reference orientation. There are two possible
ways to specify the reference orientation. The first possibility is to specify three rotation angles in RefRotX,
RefRotY, and RefRotZ and the order in which the three rotations are to be applied in OrderOfRotation,
which can either be ’gba’ or ’abg’. The second possibility is to specify the three components of the Rodriguez
rotation vector in RefRotX, RefRotY, and RefRotZ. In this case, OrderOfRotation must be set to ’ro-
driguez’ (see create_pose for detailed information about the order of the rotations and the definition of the
Rodriguez vector).
Thus, two transformations are applied to the 3D object model before computing the model views within the pose
range. The first transformation is the translation of the origin of the coordinate systems to the reference point. The
second transformation is the rotation of the 3D object model to the desired reference orientation around the axes
of the reference coordinate system. By combining both transformations one obtains the reference pose of the 3D
shape model. The reference pose of the 3D shape model thus describes the pose of the reference coordinate system
with respect to the coordinate system of the 3D object model defined by the CAD file. Let t = (x, y, z)^T be the
coordinates of the reference point of the 3D object model and R be the rotation matrix containing the reference
orientation. Then, a point p_m given in the 3D object model coordinate system can be transformed to a point p_r in
the reference coordinate system of the 3D shape model by applying the following formula:
p_r = R · (p_m − t)
This transformation can be expressed by a homogeneous 3D transformation matrix or alternatively in terms of a
3D pose. The latter can be queried by passing ’reference_pose’ for the parameter GenParamName of the operator
get_shape_model_3d_params. The above formula can be best imagined as a pose of pose type 8, 10, or 12,
depending on the value that was chosen for OrderOfRotation (see create_pose for detailed information
about the different pose types). Note, however, that get_shape_model_3d_params always returns the pose
using the pose type 0. Finally, poses that are given in one of the two coordinate systems can be transformed to the
other coordinate system by using trans_pose_shape_model_3d.
Furthermore, it should be noted that the reference coordinate system is introduced only to specify the pose range
in a convenient way. The pose resulting from the 3D matching that is performed with find_shape_model_3d
always refers to the original 3D object model coordinate system used in the CAD file.
With MinContrast, it can be determined which edge contrast the model must at least have in the recognition
performed by find_shape_model_3d. In other words, this parameter separates the model from the noise in
the image. Therefore, a good choice is the range of gray value changes caused by the noise in the image. If,
for example, the gray values fluctuate within a range of 10 gray levels, MinContrast should be set to 10. If
multichannel images are used for the search images, the noise in one channel must be multiplied by the square root
of the number of channels to determine MinContrast. If, for example, the gray values fluctuate within a range
of 10 gray levels in a single channel and the image is a three-channel image, MinContrast should be set to 17.
If the model should be recognized in very low contrast images, MinContrast must be set to a correspondingly
small value. If the model should be recognized even if it is severely occluded, MinContrast should be slightly
larger than the range of gray value fluctuations created by noise in order to ensure that the pose of the model is
extracted robustly and accurately by find_shape_model_3d.
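Putting these application-dependent parameters together, the model creation could look as follows (a hedged
HDevelop sketch; the file name, camera parameters, and all values are placeholders):
* Create a 3D shape model covering +/- 45 degrees in longitude and latitude, the full camera
* roll range, and camera-object distances between 0.2 m and 0.4 m.
read_object_model_3d ('object.ply', 'm', [], [], ObjectModel3D, Status)
create_shape_model_3d (ObjectModel3D, CamParam, 0, 0, 0, 'gba', -rad(45), rad(45), -rad(45), rad(45), -rad(180), rad(180), 0.2, 0.4, 10, [], [], ShapeModel3DID)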
The parameters described above are application-dependent and must always be specified when creating a 3D
shape model. In addition, there are some generic parameters that can optionally be used to influence the model
creation. For most applications these parameters need not be specified but can be left at their default val-
ues. If desired, these parameters and their corresponding values can be specified by using GenParamName and
GenParamValue, respectively. The following values for GenParamName are possible:
’num_levels’: For efficiency reasons the model views are generated on multiple pyramid levels. On higher levels
fewer views are generated than on lower levels. With the parameter ’num_levels’ the number of pyramid
levels on which model views are generated can be specified. It should be chosen as large as possible because
by this the time necessary to find the model is significantly reduced. On the other hand, the number of levels
must be chosen such that the shape representations of the views on the highest pyramid level are still recog-
nizable and contain a sufficient number of points (at least four). If not enough model points are generated for
a certain view, the view is deleted from the model and replaced by a view on a lower pyramid level. If for all
views on a pyramid level not enough model points are generated, the number of levels is reduced internally
until for at least one view enough model points are found on the highest pyramid level. If this procedure
would lead to a model with no pyramid levels, i.e., if the number of model points is too small for all views al-
ready on the lowest pyramid level, create_shape_model_3d returns an error message. If ’num_levels’
is set to ’auto’ (default value), create_shape_model_3d determines the number of pyramid levels au-
tomatically. In this case all model views on all pyramid levels are automatically checked whether their shape
representations are still recognizable. If the shape representation of a certain view is found to be not recog-
nizable, the view is deleted from the model and replaced by a view on a lower pyramid level. Note that if
’num_levels’ is set to ’auto’, the number of pyramid levels can be different for different views. In rare cases,
it might happen that create_shape_model_3d determines a value for the number of pyramid levels that
is too large or too small. If the number of pyramid levels is chosen too large, the model may not be recog-
nized in the image or it may be necessary to select very low parameters for MinScore or Greediness in
find_shape_model_3d in order to find the model. If the number of pyramid levels is chosen too small,
the time required to find the model in find_shape_model_3d may increase. In these cases, the views
on the pyramid levels should be checked by using the output of get_shape_model_3d_contours.
Suggested values: ’auto’, 3, 4, 5, 6
Default: ’auto’
’fast_pose_refinement’: The parameter specifies whether the pose refinement during the search with
find_shape_model_3d is sped up. If ’fast_pose_refinement’ is set to ’false’, for complex models with a
large number of faces the pose refinement step might amount to a significant part of the overall computation
time. If ’fast_pose_refinement’ is set to ’true’, some of the calculations that are necessary during the pose
refinement are already performed during the model generation and stored in the model. Consequently, the
pose refinement during the search will be faster. Please note, however, that in this case the memory con-
sumption of the model may increase significantly (typically by less than 30 percent). Further note that the
resulting poses that are returned by find_shape_model_3d might slightly differ depending on the value
of ’fast_pose_refinement’, because internally the pose refinement is approximated if the parameter is set to
’true’.
List of values: ’true’, ’false’
Default: ’true’
’lowest_model_level’: In some cases the model generation process might be very time consuming and the memory
consumption of the model might be very high. The reason for this is that in these cases the number of views,
which must be computed and stored in the model, is very high. The larger the pose range is chosen and
the larger the objects appear in the image (measured in pixels) the more views are necessary. Consequently,
especially the use of large images (e.g., images exceeding a size of 640 × 480) can result in very large mod-
els. Because the number of views is highest on lower pyramid levels, the parameter ’lowest_model_level’
can be used to exclude the lower pyramid levels from the generation of views. The value that is passed for
’lowest_model_level’ determines the lowest pyramid level down to which views are generated and stored
in the 3d shape model. If, for example, a value of 2 is passed for large models, the time to generate the
model as well as the size of the resulting model is reduced to approximately one third of the original values.
If ’lowest_model_level’ is not passed, views are generated for all pyramid levels, which corresponds to the
behavior when passing a value of 1 for ’lowest_model_level’. If for ’lowest_model_level’ a value larger than
1 is passed, in find_shape_model_3d the tracking of matches through the pyramid will be stopped at
this level. However, if in find_shape_model_3d a least-squares adjustment is chosen for pose refine-
ment, the matches are refined on the lowest pyramid level using the least-squares adjustment. Note that for
different values for ’lowest_model_level’ different matches might be found during the search. Furthermore,
the score of the matches depends on the chosen method for pose refinement. Also note that the higher ’low-
est_model_level’ is chosen the higher the portion of the refinement step with respect to the overall run-time of
find_shape_model_3d will be. As a consequence for higher values of ’lowest_model_level’ the influ-
ence of the generic parameter ’fast_pose_refinement’ (see above) on the runtime will increase. A large value
for ’lowest_model_level’ on the one hand may lead to long computation times of find_shape_model_3d
if ’fast_pose_refinement’ is switches off (’false’). On the other hand it may lead to a decreased accuracy if
’fast_pose_refinement’ is switches on (’true’) because in this mode the pose refinement is only approxi-
mated. Therefore, the value for ’lowest_model_level’ should be chosen as small as possible. Furthermore,
’lowest_model_level’ should be chosen small enough such that the edges of the 3D object model are still
observable on this level.
Suggested values: 1, 2, 3
Default: 1
’optimization’: For models with particularly large model views, it may be useful to reduce the number of model
points by setting ’optimization’ to a value different from ’none’. If ’optimization’ = ’none’, all model points
are stored. In all other cases, the number of points is reduced according to the value of ’optimization’.
If the number of points is reduced, it may be necessary in find_shape_model_3d to set the parame-
ter Greediness to a smaller value, e.g., 0.7 or 0.8. For models with small model views, the reduction
of the number of model points does not result in a speed-up of the search because in this case usually
significantly more potential instances of the model must be examined. If ’optimization’ is set to ’auto’,
create_shape_model_3d automatically determines the reduction of the number of model points for
each model view.
List of values: ’auto’, ’none’, ’point_reduction_low’, ’point_reduction_medium’, ’point_reduction_high’
Default: ’auto’
’metric’: This parameter determines the conditions under which the model is recognized in the image. If ’metric’
= ’ignore_part_polarity’, the contrast polarity is allowed to change only between different parts of the model,
whereas the polarity of model points that are within the same model part must not change. Please note that
the term ’ignore_part_polarity’ can be misleading: it means that polarity changes between
neighboring model parts do not influence the score, and hence are ignored. Appropriate model parts are
automatically determined. The size of the parts can be controlled by the generic parameter ’part_size’, which
is described below. Note that this metric only works for one-channel images. Consequently, if the model
is created by using this metric and searched in a multi-channel image by using find_shape_model_3d
an error will be returned. If ’metric’ = ’ignore_local_polarity’, the model is found even if the contrast
polarity changes for each individual model point. This metric works for one-channel images as well as
for multi-channel images. The metric ’ignore_part_polarity’ should be used if the images contain strongly
textured backgrounds or clutter objects, which might result in wrong matches. Note that in general the scores
of the matches that are returned by find_shape_model_3d are lower for ’ignore_part_polarity’ than
for ’ignore_local_polarity’. This should be kept in mind when choosing the right value for the parameter
MinScore of find_shape_model_3d.
List of values: ’ignore_local_polarity’, ’ignore_part_polarity’
Default: ’ignore_local_polarity’
’part_size’: This parameter determines the size of the model parts that is used when ’metric’ is set to ’ig-
nore_part_polarity’ (see above). The size must be specified in pixels and should be approximately twice
as large as the size of the background texture in the image. For example, if an object should be found in front
of a chessboard with black and white squares of size 5 × 5 pixels, ’part_size’ should be set to 10. Note that
higher values of ’part_size’ might also decrease the scores of correct instances especially when searching for
objects with shiny or reflective surfaces. Therefore, the risk of missing correct instances might increase if
’part_size’ is set to a higher value. If ’metric’ is set to ’ignore_local_polarity’, the value of ’part_size’ is
ignored.
Suggested values: 2, 3, 4, 6, 8, 10
Default: 4
’min_face_angle’: 3D edges are only included in the shape representations of the views if the angle between
the two 3D faces that are incident with the 3D object model edge is at least ’min_face_angle’. If
’min_face_angle’ is set to 0.0, all edges are included. If ’min_face_angle’ is set to π (equivalent to 180
degrees), only the silhouette of the 3D object model is included. This parameter can be used to suppress
edges within curved surfaces, e.g., the surface of a cylinder or cone. Curved surfaces are approximated by
multiple planar faces. The edges between such neighboring planar faces should not be included in the shape
representation because they also do not appear in real images of the model. Thus, ’min_face_angle’ should
be set sufficiently high to suppress these edges. The effect of different values for ’min_face_angle’ can be
inspected by using project_object_model_3d before calling create_shape_model_3d. Note
that if edges that are not visible in the search image are included in the shape representation, the performance
(robustness and speed) of the matching may decrease considerably.
Suggested values: ’rad(10)’, ’rad(20)’, ’rad(30)’, ’rad(45)’
Default: ’rad(30)’
’min_size’: This value determines a threshold for the selection of significant model components based on the size
of the components, i.e., connected components that have fewer points than the specified minimum size are
suppressed. This threshold for the minimum size is divided by two for each successive pyramid level.
Suggested values: ’auto’, 0, 3, 5, 10, 20
Default: ’auto’
’model_tolerance’: The parameter specifies the tolerance of the projected 3D object model edges in the image,
given in pixels. The higher the value is chosen, the fewer views need to be generated. Consequently, a higher
value results in models that are less memory consuming and faster to find with find_shape_model_3d.
On the other hand, if the value is chosen too high, the robustness of the matching will decrease. Therefore,
this parameter should only be modified with care. For most applications, a good compromise between speed
and robustness is obtained when setting ’model_tolerance’ to 1.
Suggested values: 0, 1, 2
Default: 1
’union_adjacent_contours’: This parameter specifies if adjacent projected contours should be joined by
the operator project_shape_model_3d or not. Activating this option is equivalent to calling
union_adjacent_contours_xld afterwards, but significantly faster.
References
M. Ulrich, C. Wiedemann, C. Steger: “Combining Scale-Space and Similarity-Based Aspect Graphs for Fast 3D
Object Recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(10), pp. 1902-1914,
Oct. 2012.
Module
3D Metrology
deserialize_shape_model_3d (
: : SerializedItemHandle : ShapeModel3DID )
deviations of the pose parameters for each match. In contrast, if the generic parameter ’cov_pose_mode’ (see
below) was set to ’covariances’, CovPose contains the 36 values of the complete 6 × 6 covariance matrix of the 6
pose parameters. Note that this reflects only an inner accuracy from which the real accuracy of the pose may differ.
Finally, the score of each found instance is returned in Score. The score is a number between 0 and 1, which is
an approximate measure of how much of the model is visible in the image. If, for example, half of the model is
occluded, the score cannot exceed 0.5.
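A hedged HDevelop sketch of a search call (all parameter values are placeholders):
* Search for at most one instance with a minimum score of 0.7.
find_shape_model_3d (Image, ShapeModel3DID, 0.7, 0.9, 0, 'num_matches', 1, Pose, CovPose, Score)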
Input parameters in detail
Image and its domain: The domain of the image Image determines the search space for the reference point of
the 3D object model. There is no need to correct any distortions in Image as the calibration data has already
been provided during the model creation.
MinScore: The parameter MinScore determines what score a potential match must at least have to be regarded
as an instance of the model in the image. The larger MinScore is chosen, the faster the search is. If the
model can be expected never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9.
Note that in images with a high degree of clutter or strong background texture, MinScore should be set to
a value not much lower than 0.7 since otherwise false matches could be found.
Greediness: The parameter Greediness determines how “greedily” the search should be carried out. If
Greediness = 0, a safe search heuristic is used, which always finds the model if it is visible in the image.
However, the search will be relatively time consuming in this case. If Greediness = 1, an unsafe search
heuristic is used, which may cause the model not to be found in rare cases, even though it is visible in the
image. For Greediness = 1, the maximum search speed is achieved. In almost all cases, the 3D shape
model will always be found for Greediness = 0.9.
NumLevels: The number of pyramid levels used during the search is determined with NumLevels. If nec-
essary, the number of levels is clipped to the range given when the 3D shape model was created with
create_shape_model_3d. If NumLevels is set to 0, the number of pyramid levels specified in
create_shape_model_3d is used. Optionally, NumLevels can contain a second value that determines
the lowest pyramid level to which the found matches are tracked. Hence, a value of [4,2] for NumLevels
means that the matching starts at the fourth pyramid level and tracks the matches to the second lowest pyra-
mid level (the lowest pyramid level is denoted by a value of 1). This mechanism can be used to decrease
the runtime of the matching. If the lowest pyramid level to use is chosen too large, it may happen that the
desired accuracy cannot be achieved, or that wrong instances of the model are found because the model is
not specific enough on the higher pyramid levels to facilitate a reliable selection of the correct instance of the
model. In this case, the lowest pyramid level to use must be set to a smaller value.
GenParamName and GenParamValue: In addition to the parameters described above, there are some generic
parameters that can optionally be used to influence the matching. For most applications these parameters need
not be specified but can be left at their default values. If desired, these parameters and their corresponding
values can be specified by using GenParamName and GenParamValue, respectively. The following
values for GenParamName are possible:
• If the pose range in which the model is to be searched is smaller than the pose range that was specified
during the model creation with create_shape_model_3d, the pose range can be restricted appro-
priately with the following parameters. If the values lie outside the pose range of the model, the values
are automatically clipped to the pose range of the model.
’longitude_min’: Sets the minimum longitude of the pose range.
Suggested values: ’rad(-45)’, ’rad(-30)’, ’rad(-15)’
Default: ’rad(-180)’
’longitude_max’: Sets the maximum longitude of the pose range.
Suggested values: ’rad(15)’, ’rad(30)’, ’rad(45)’
Default: ’rad(180)’
’latitude_min’: Sets the minimum latitude of the pose range.
Suggested values: ’rad(-45)’, ’rad(-30)’, ’rad(-15)’
Default: ’rad(-90)’
’latitude_max’: Sets the maximum latitude of the pose range.
Suggested values: ’rad(15)’, ’rad(30)’, ’rad(45)’
Default: ’rad(90)’
’cam_roll_min’: Sets the minimum camera roll angle of the pose range.
Suggested values: ’rad(-45)’, ’rad(-30)’, ’rad(-15)’
Default: ’rad(-180)’
’cam_roll_max’: Sets the maximum camera roll angle of the pose range.
Suggested values: ’rad(15)’, ’rad(30)’, ’rad(45)’
Default: ’rad(180)’
’dist_min’: Sets the minimum camera-object-distance of the pose range.
Suggested values: 0.05, 0.1, 0.5, 1.0
Default: 0
’dist_max’: Sets the maximum camera-object-distance of the pose range.
Suggested values: 0.05, 0.1, 0.5, 1.0
Default: (∞)
• Further generic parameters that do not concern the pose range can be specified:
’num_matches’: With this parameter the maximum number of instances to be found can be determined.
If more than the specified number of instances with a score greater than MinScore are found in the
image, only the best ’num_matches’ instances are returned. If fewer than ’num_matches’ are found,
only that number is returned, i.e., the parameter MinScore takes precedence over ’num_matches’.
If ’num_matches’ is set to 0, all matches that satisfy the score criterion are returned. Note that the
more matches should be found the slower the matching will be.
Suggested values: 0, 1, 2, 3
Default: 1
’max_overlap’: It may happen that multiple instances with similar positions but with different orien-
tations are found in the image. The parameter ’max_overlap’ determines by what fraction (i.e., a
number between 0 and 1) two instances may at most overlap in order to consider them as different
instances, and hence to be returned separately. If two instances overlap each other by more than
the specified value only the best instance is returned. The calculation of the overlap is based on the
smallest enclosing rectangle of arbitrary orientation (see smallest_rectangle2) of the found
instances. If in create_shape_model_3d for ’lowest_model_level’ a value larger than 1 was
passed, the overlap calculation is based on the projection of the smallest enclosing axis-parallel
cuboid of the 3D object model. Because in this case the overlap might be overestimated, in some
cases it could be necessary to increase the value for ’max_overlap’. If ’max_overlap’ = 0, the
found instances may not overlap at all, while for ’max_overlap’ = 1 all instances are returned.
Suggested values: 0.0, 0.2, 0.4, 0.6, 0.8, 1.0
Default: 0.5
’pose_refinement’: This parameter determines whether the poses of the instances should be refined af-
ter the matching. If ’pose_refinement’ is set to ’none’ the model’s pose is only determined with a
limited accuracy. In this case, the accuracy depends on several sampling steps that are used inside
the matching process and therefore cannot be predicted very well. Hence, ’pose_refinement’
should only be set to ’none’ when the computation time is of primary concern and an approxi-
mate pose is sufficient. In all other cases the pose should be determined through a least-squares
adjustment, i.e., by minimizing the distances of the model points to their corresponding image
points. In order to achieve a high accuracy, this refinement is directly performed in 3D. Therefore,
the refinement requires additional computation time. If the system variable (see set_system)
’opengl_hidden_surface_removal_enable’ is set to ’true’ (which is default if it is available) and the
model ShapeModel3DID was created with ’fast_pose_refinement’ set to ’false’, the projection of
the model in the pose refinement step is accelerated using the graphics card. Depending on the graph-
ics card, this is significantly faster than the non-accelerated algorithm. Be aware that the results of the
OpenGL projection are slightly different compared to the analytic projection. The different modes
for least-squares adjustment (’least_squares’, ’least_squares_high’, and ’least_squares_very_high’)
can be used to determine the accuracy with which the minimum distance is searched for. The higher
the accuracy is chosen, the longer the pose refinement will take, however. For most applications
’least_squares_high’ should be chosen because this results in the best trade-off between runtime
and accuracy. Note that the pose refinement can be sped up by passing ’fast_pose_refinement’ for
the parameter GenParamName of the operator create_shape_model_3d.
List of values: ’none’, ’least_squares’, ’least_squares_high’, ’least_squares_very_high’
Default: ’least_squares_high’
’recompute_score’: This parameter determines whether the score of the matches is recomputed after
the pose refinement. If ’recompute_score’ is set to ’false’, the score is returned that was computed
before the pose refinement. In some cases, however, the pose refinement changes the object pose by
more than one pixel in the image. Consequently, the original score does not appropriately describe
the refined match any longer. This could result in wrong matches obtaining high scores or perfect
matches obtaining low scores. To obtain a more meaningful score that reflects the pose changes due
to the pose refinement, the score can be recomputed after the pose refinement by setting ’recom-
pute_score’ to ’true’. Note that this might change the order of the matches as well as the selection
of matches that is returned. Also note that the recomputation of the score values needs additional
computation time. This increase in runtime can be reduced by setting the generic parameter
’fast_pose_refinement’ of the operator create_shape_model_3d to ’true’.
List of values: ’false’, ’true’
Default: ’false’
’outlier_suppression’: This parameter only takes effect if ’pose_refinement’ is set to a value other than
’none’, and hence, a least-squares adjustment is performed. Then, in some cases it might be useful
to apply a robust outlier suppression during the least-squares adjustment. This might be necessary,
for example, if a high degree of clutter is present in the image, which prevents the least-squares
adjustment from finding the optimum pose. In this case, ’outlier_suppression’ should be set to
either ’medium’ (eliminates a medium proportion of outliers) or ’high’ (eliminates a high proportion
of outliers). However, in most applications, no robust outlier suppression is necessary, and hence,
’outlier_suppression’ can be set to ’none’. It should be noted that activating the outlier suppression
significantly increases the computation time.
List of values: ’none’, ’medium’, ’high’
Default: ’none’
’cov_pose_mode’: This parameter only takes effect if ’pose_refinement’ is set to a value other than
’none’, and hence, a least-squares adjustment is performed. ’cov_pose_mode’ determines the mode
in which the accuracies that are computed during the least-squares adjustment are returned in
CovPose. If ’cov_pose_mode’ is set to ’standard_deviations’, the 6 standard deviations of the
6 pose parameters are returned for each match. In contrast, if ’cov_pose_mode’ is set to ’covari-
ances’, CovPose contains the 36 values of the complete 6 × 6 covariance matrix of the 6 pose
parameters.
List of values: ’standard_deviations’, ’covariances’
Default: ’standard_deviations’
’border_model’: The model is searched within those points of the domain of the image in which the
model lies completely within the image. This means that the model will not be found if it extends
beyond the borders of the image, even if it would achieve a score greater than MinScore. Note
that, if for a certain pyramid level the model touches the image border, it might not be found even
if it lies completely within the original image. As a rule of thumb, the model might not be found if
its distance to an image border falls below 2^(NumLevels-1). This behavior can be changed by setting
’border_model’ to ’true’, which will cause models that extend beyond the image border to be found
if they achieve a score greater than MinScore. Here, points lying outside the image are regarded
as being occluded, i.e., they lower the score. It should be noted that the runtime of the search
will increase in this mode. Note further that in rare cases, which typically occur only for artificial
images, the model might not be found if it touches the border of the reduced domain on certain
pyramid levels. In this case, it may help to enlarge the reduced domain by 2^(NumLevels-1) using,
e.g., dilation_circle.
List of values: ’false’, ’true’
Default: ’false’
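As a further hedged illustration of the generic parameters (the file name, the image, and all chosen values are placeholders), a search that restricts the longitude range, requests up to two matches, and recomputes the score after the pose refinement could look like this:
* Hypothetical sketch: restricted pose range, up to two matches, recomputed score.
* NumLevels = 0 is assumed to leave the pyramid levels at the values stored in the model.
read_shape_model_3d ('model.sm3', ShapeModel3DID)
find_shape_model_3d (Image, ShapeModel3DID, 0.7, 0.85, 0, ['longitude_min','longitude_max','num_matches','recompute_score'], [rad(-30),rad(30),2,'true'], Pose, CovPose, Score)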
Parameters
Result
If the parameter values are correct, the operator find_shape_model_3d returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised. If the model was created with
create_shape_model_3d by setting ’metric’ to ’ignore_part_polarity’ and a multi-channel input image is
passed in Image, the error 3359 is raised.
Execution Information
Possible Predecessors
create_shape_model_3d, read_shape_model_3d
Possible Successors
project_shape_model_3d
See also
convert_point_3d_cart_to_spher, convert_point_3d_spher_to_cart,
create_cam_pose_look_at_point, trans_pose_shape_model_3d
Module
3D Metrology
get_shape_model_3d_params ( : : ShapeModel3DID,
GenParamName : GenParamValue )
The operator get_shape_model_3d_params returns the values of the parameters of the 3D shape model
ShapeModel3DID that are specified in GenParamName. The values are returned in GenParamValue. The
following parameters can be queried:
’cam_param’: Internal parameters of the camera that is used for the matching.
’ref_rot_x’: Reference orientation: Rotation around x-axis or x component of the Rodriguez vector (in radians or
without unit).
’ref_rot_y’: Reference orientation: Rotation around y-axis or y component of the Rodriguez vector (in radians or
without unit).
’ref_rot_z’: Reference orientation: Rotation around z-axis or z component of the Rodriguez vector (in radians or
without unit).
’order_of_rotation’: Meaning of the rotation values of the reference orientation.
’longitude_min’: Minimum longitude of the model views.
’longitude_max’: Maximum longitude of the model views.
’latitude_min’: Minimum latitude of the model views.
’latitude_max’: Maximum latitude of the model views.
’cam_roll_min’: Minimum camera roll angle of the model views.
’cam_roll_max’: Maximum camera roll angle of the model views.
’dist_min’: Minimum camera-object-distance of the model views.
’dist_max’: Maximum camera-object-distance of the model views.
’min_contrast’: Minimum contrast of the objects in the search images.
’num_levels’: User-specified number of pyramid levels.
’num_levels_max’: Maximum number of used pyramid levels over all model views.
’optimization’: Kind of optimization by reducing the number of model points.
’metric’: Match metric.
’part_size’: Size of the model parts that is used when ’metric’ is set to ’ignore_part_polarity’.
’min_face_angle’: Minimum 3D face angle for which 3D object model edges are included in the 3D shape model.
’min_size’: Minimum size of the projected 3D object model edge (in number of pixels) to include the projected
edge in the 3D shape model.
’model_tolerance’: Maximum acceptable tolerance of the projected 3D object model edges (in pixels).
’num_views_per_level’: Number of model views per pyramid level. For each pyramid level the number of views
that are stored in the 3D shape model are returned. Thus, the number of returned elements corresponds to the
number of used pyramid levels, which can be queried with ’num_levels_max’. Note that for pyramid levels
below ’lowest_model_level’ (see documentation of create_shape_model_3d), the value 0 is returned.
’reference_pose’: Reference position and orientation of the 3D shape model. The returned pose is in the form
rcs Pmcs, where rcs denotes the reference coordinate system and mcs the model coordinate system (which
is a 3D world coordinate system), see Transformations / Poses and “Solution Guide III-C - 3D
Vision”. Hence, it describes the pose of the coordinate system that is used in the underlying 3D object
model relative to the internally used reference coordinate system of the 3D shape model. With this pose,
points given in the object coordinate system can be transformed into the reference coordinate system.
’reference_point’: 3D coordinates of the reference point of the underlying 3D object model.
’bounding_box1’: Smallest enclosing axis-parallel cuboid of the underlying 3D object model in the following
order: [min_x, min_y, min_z, max_x, max_y, max_z].
’fast_pose_refinement’: Describes whether the pose refinement during the search is performed in a sped up mode
(’true’) or in the conventional mode (’false’).
’lowest_model_level’: Lowest pyramid level down to which views are stored in the model.
’union_adjacent_contours’: Describes whether in project_shape_model_3d adjacent contours should be
joined or not.
A detailed description of the parameters can be looked up with the operator create_shape_model_3d.
It is possible to query the values of several parameters with a single operator call by passing a tuple containing the
names of all desired parameters to GenParamName. As a result a tuple of the same length with the corresponding
values is returned in GenParamValue. Note that this is solely possible for parameters that return only a single
value.
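A hedged usage sketch (the model handle is assumed to exist; the output variable names are placeholders):
* Hypothetical sketch: query the reference pose and, in a second call, several
* single-valued parameters at once.
get_shape_model_3d_params (ShapeModel3DID, 'reference_pose', ReferencePose)
get_shape_model_3d_params (ShapeModel3DID, ['num_levels_max','min_contrast','lowest_model_level'], GenParamValues)
* The reference pose can, e.g., be converted into a homogeneous transformation
* matrix in order to transform object points into the reference coordinate system.
pose_to_hom_mat3d (ReferencePose, HomMat3D)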
Parameters
. ShapeModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; handle
Handle of the 3D shape model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Names of the generic parameters that are to be queried for the 3D shape model.
Default: ’num_levels_max’
List of values: GenParamName ∈ {’cam_param’, ’ref_rot_x’, ’ref_rot_y’, ’ref_rot_z’, ’order_of_rotation’,
’longitude_min’, ’longitude_max’, ’latitude_min’, ’latitude_max’, ’cam_roll_min’, ’cam_roll_max’,
’dist_min’, ’dist_max’, ’min_contrast’, ’num_levels’, ’num_levels_max’, ’optimization’, ’metric’, ’part_size’,
’min_face_angle’, ’min_size’, ’model_tolerance’, ’num_views_per_level’, ’reference_pose’,
’reference_point’, ’bounding_box1’, ’fast_pose_refinement’, ’lowest_model_level’,
’union_adjacent_contours’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Values of the generic parameters.
Result
If the parameters are valid, the operator get_shape_model_3d_params returns the value 2 (H_MSG_TRUE).
If necessary an exception is raised.
Execution Information
Possible Predecessors
create_shape_model_3d, read_shape_model_3d, get_shape_model_3d_params,
find_shape_model_3d
Alternatives
project_object_model_3d
See also
convert_point_3d_cart_to_spher, convert_point_3d_spher_to_cart,
create_cam_pose_look_at_point, trans_pose_shape_model_3d
Module
3D Metrology
serialize_shape_model_3d (
: : ShapeModel3DID : SerializedItemHandle )
trans_pose_shape_model_3d ( : : ShapeModel3DID, PoseIn,
Transformation : PoseOut )
Transform a pose that refers to the coordinate system of a 3D object model to a pose that refers to the reference
coordinate system of a 3D shape model and vice versa.
The operator trans_pose_shape_model_3d transforms the pose PoseIn into the pose PoseOut by using
the transformation direction specified in Transformation. In the majority of cases, the operator will be used
to transform a camera pose that is given relative to the source coordinate system to a camera pose that refers to the
target coordinate system.
The pose can be transformed between two coordinate systems. The first coordinate system is the reference co-
ordinate system of the 3D shape model (ref ) that is passed in ShapeModel3DID. The origin of the reference
coordinate system lies at the reference point of the underlying 3D object model. The orientation of the reference
coordinate system is determined by the reference orientation that was specified when creating the 3D shape model
with create_shape_model_3d.
The second coordinate system is the world coordinate system, i.e., the coordinate system of the 3D object model
(mcs) that underlies the 3D shape model. This coordinate system is implicitly determined by the coordinates that
are stored in the CAD file that was read by using read_object_model_3d.
If Transformation is set to ’ref_to_model’, it is assumed that PoseIn refers to the reference coordinate
system of the 3D shape model. Thus, PoseIn is cs Prcs , where cs denotes the coordinate system the input pose
transforms into (e.g., the camera coordinate system). For further information we refer to Transformations / Poses
and “Solution Guide III-C - 3D Vision”. The resulting output pose PoseOut in this case refers to
the coordinate system of the 3D object model, thus cs Pmcs .
If Transformation is set to ’model_to_ref’, it is assumed that PoseIn refers to the coordinate system of
the 3D object model, cs Pmcs . The resulting output pose PoseOut in this case refers to the reference coordinate
system of the 3D shape model, thus cs Prcs .
The relative pose of the two coordinate systems can be queried by passing ’reference_pose’ for GenParamName
in the operator get_shape_model_3d_params.
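As a hedged illustration (the pose values and variable names are placeholders), a camera pose that refers to the reference coordinate system of the 3D shape model can be transformed into a pose that refers to the coordinate system of the underlying 3D object model as follows:
* Hypothetical sketch: build an example camera pose and transform it from the
* reference coordinate system into the model coordinate system.
create_pose (0.1, -0.05, 0.8, 30, 0, 180, 'Rp+T', 'gba', 'point', PoseRef)
trans_pose_shape_model_3d (ShapeModel3DID, PoseRef, 'ref_to_model', PoseModel)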
Parameters
. ShapeModel3DID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . shape_model_3d ; handle
Handle of the 3D shape model.
. PoseIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Pose to be transformed in the source system.
. Transformation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Direction of the transformation.
Default: ’ref_to_model’
List of values: Transformation ∈ {’ref_to_model’, ’model_to_ref’}
. PoseOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Transformed 3D pose in the target system.
Result
If the parameters are valid, the operator trans_pose_shape_model_3d returns the value 2 (H_MSG_TRUE).
If necessary an exception is raised.
Execution Information
3.6 Surface-Based
clear_surface_matching_result ( : : SurfaceMatchingResultID : )
• SurfaceMatchingResultID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
find_surface_model, refine_surface_model_pose
See also
find_surface_model, refine_surface_model_pose
Module
3D Metrology
clear_surface_model ( : : SurfaceModelID : )
create_surface_model ( : : ObjectModel3D, RelSamplingDistance,
GenParamName, GenParamValue : SurfaceModelID )
The operator create_surface_model creates a model for surface-based matching by sampling the 3D object
model ObjectModel3D with the distance RelSamplingDistance, given relative to the object’s diameter.
The sampled points are used for finding the object model in a scene by using the operator
find_surface_model. For this, all possible pairs of points from the point set are examined, and the distance
and relative surface orientation of each pair is computed. Both values are discretized and stored for matching.
The generic parameters ’feat_step_size_rel’ and ’feat_angle_resolution’ can be used to set the discretization of the
distance and the orientation angles, respectively (see below).
The 3D object model is sampled a second time for the pose refinement. The second sampling is done with a
smaller sampling distance, leading to more points. The generic parameter ’pose_ref_rel_sampling_distance’ sets
the sampling distance relative to the object’s diameter. Decreasing the value results in a more accurate pose
refinement but a larger model and a slower model generation and matching. Increasing the value leads to a less
accurate pose refinement but a smaller model and faster model generation and matching (see below).
Surface-based matching can additionally use 3D edges to improve the alignment. This is particularly helpful for
objects that are planar or contain large planar sides, which might otherwise be found in incorrect rotations or in a
background plane. In order to allow find_surface_model to also align edges, the surface model must be trained by setting
the generic parameter ’train_3d_edges’ to ’true’. In this case, the model must contain a triangular or polygon mesh
where the order of the points results in normals that point inwards. Also, the training for edge-supported surface-
based matching requires OpenGL 2.1, GLSL 1.2, and the OpenGL extensions GL_EXT_framebuffer_object and
GL_EXT_framebuffer_blit. Note that the training can take significantly longer than without edge-support.
Additionally, the model can be prepared to support view-based score computation. This is particularly helpful
for models where only a small part of the 3D object model is visible, which results in low scores if the ratio to
the total number of points is used. Accordingly, the view-based score is computed using the ratio of the matched
points to the maximum number of potentially visible model points from a certain viewpoint. In order to al-
low find_surface_model to compute a view-based score, the surface model must be trained by setting the
generic parameter ’train_view_based’ to ’true’. Similar to ’train_3d_edges’, the model must contain a triangular
or polygon mesh where the order of the points results in normals that point inwards.
Note that using noisy data for the creation of your 3D object model results in the computation of deficient surface
normals. Especially when the model is prepared for the use with 3D edges or the support of view-based score, this
can lead to unreliable scores. In order to reduce noisy 3D data you can, e.g., use smooth_object_model_3d
or simplify_object_model_3d.
The generic parameter pair GenParamName and GenParamValue is used to set additional parameters
for the model generation. GenParamName contains the tuple of parameter names to be set and
GenParamValue contains the corresponding values; a hedged usage sketch follows the list of parameters below.
The following values are possible for GenParamName:
’model_invert_normals’: Invert the orientation of the surface normals of the model. The normal orientation needs
to be known for the model generation. If both the model and the scene are acquired with the same setup, the
normals will already point in the same direction. If the model was loaded from a CAD file, the normals might
point into the opposite direction. If you experience the effect that the model is found on the ’outside’ of the
scene surface and the model was created from a CAD file, try to set this parameter to ’true’. Also, make
sure that the normals in the CAD file all point either outward or inward, i.e., are oriented consistently. The
normal direction is irrelevant for the pose refinement of the surface model. Therefore, if the object model is
only used with the operator refine_surface_model_pose, the value of ’model_invert_normals’ has
no effect on the result.
List of values: ’false’, ’true’
Default: ’false’
’pose_ref_rel_sampling_distance’: Set the sampling distance for the pose refinement relative to the object’s di-
ameter. Decreasing this value leads to a more accurate pose refinement but a larger model and slower model
generation and refinement. Increasing the value leads to a less accurate pose refinement but a smaller model
and faster model generation and matching.
Suggested values: 0.05, 0.02, 0.01, 0.005
Default: 0.01
Restriction: 0 < ’pose_ref_rel_sampling_distance’ < 1
’feat_step_size_rel’: Set the discretization distance of the point pair distance relative to the object’s diameter. This
value defaults to the value of RelSamplingDistance. It is not recommended to change this value. For
very noisy scenes, the value can be increased to improve the robustness of the matching against noisy points.
Suggested values: 0.1, 0.05, 0.03
Default: Value of RelSamplingDistance
Restriction: 0 < ’feat_step_size_rel’ < 1
’feat_angle_resolution’: Set the discretization of the point pair orientation as the number of subdivisions of the
angle. It is recommended to not change this value. Increasing the value increases the precision of the
matching but decreases the robustness against incorrect normal directions. Decreasing the value decreases
the precision of the matching but increases the robustness against incorrect normal directions. For very noisy
scenes where the normal directions can not be computed accurately, the value can be set to 25 or 20.
Suggested values: 20, 25, 30
Default: 30
Restriction: ’feat_angle_resolution’ > 1
’train_3d_edges’: Enable the training for edge-supported surface-based matching and refinement. In this case the
model must contain a mesh, i.e. triangles or polygons. Also, it is important that the computed normal vectors
point inwards. This parameter requires OpenGL.
List of values: ’false’, ’true’
Default: ’false’
’train_view_based’: Enable the training for view-based score computation for surface-based matching and refine-
ment. In this case the model must contain a mesh, i.e. triangles or polygons. Also, it is important that the
computed normal vectors point inwards. This parameter requires OpenGL.
List of values: ’false’, ’true’
Default: ’false’
’train_self_similar_poses’: Prepares the surface model for optimizations regarding self-similar, almost symmetric
poses. For this, poses are found under which the model is very similar to itself, i.e., poses that can be
distinguished only by very small properties of the model (such as boreholes) and that can be confused by
find_surface_model. When calling find_surface_model, it will automatically be determined
which of those self-similar poses are correct.
List of values: ’false’, ’true’
Default: ’false’
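As a hedged sketch (the file name and the chosen values are placeholders), a surface model that is prepared for edge-supported matching could be created like this:
* Hypothetical sketch: read a 3D model file, invert its normals if necessary,
* and train the model for edge-supported surface-based matching.
read_object_model_3d ('object.ply', 'm', [], [], ObjectModel3D, Status)
create_surface_model (ObjectModel3D, 0.03, ['model_invert_normals','train_3d_edges'], ['true','true'], SurfaceModelID)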
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model.
. RelSamplingDistance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Sampling distance relative to the object’s diameter
Default: 0.03
Suggested values: RelSamplingDistance ∈ {0.1, 0.05, 0.03, 0.02, 0.01}
Restriction: 0 < RelSamplingDistance < 1
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Names of the generic parameters.
Default: []
Suggested values: GenParamName ∈ {’model_invert_normals’, ’pose_ref_rel_sampling_distance’,
’feat_step_size_rel’, ’feat_angle_resolution’, ’train_3d_edges’, ’train_view_based’,
’train_self_similar_poses’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / real / integer
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {0, 1, ’true’, ’false’, 0.005, 0.01, 0.02, 0.05, 0.1}
. SurfaceModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . surface_model ; handle
Handle of the surface model.
Result
create_surface_model returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception
is raised.
Execution Information
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d, get_object_model_3d_params,
surface_normals_object_model_3d
Possible Successors
find_surface_model, refine_surface_model_pose, get_surface_model_param,
write_surface_model, clear_surface_model, set_surface_model_param
Alternatives
read_surface_model
See also
find_surface_model, refine_surface_model_pose, read_surface_model,
write_surface_model, clear_surface_model, set_surface_model_param
References
Bertram Drost, Markus Ulrich, Nassir Navab, Slobodan Ilic: “Model Globally, Match Locally: Efficient and
Robust 3D Object Recognition.” Computer Vision and Pattern Recognition, pp. 998-1005, 2010.
Module
3D Metrology
deserialize_surface_model (
: : SerializedItemHandle : SurfaceModelID )
find_surface_model ( : : SurfaceModelID, ObjectModel3D, RelSamplingDistance,
KeyPointFraction, MinScore, ReturnResultHandle, GenParamName,
GenParamValue : Pose, Score, SurfaceMatchingResultID )
The operator find_surface_model finds the best matches of the surface model SurfaceModelID in the 3D
scene ObjectModel3D. The matching is divided into the following three steps:
1. Approximate matching
2. Sparse pose refinement
3. Dense pose refinement
These steps are described in more detail in the technical note Surface-Based Matching. The generic pa-
rameters used to control these steps are described in the respective sections below. The further paragraphs describe
the parameters and mention further points to note.
The matching process and the parameters can be visualized and inspected using the HDevelop procedure
debug_find_surface_model.
Points to Note
Matching the surface model uses points and normals of the 3D scene ObjectModel3D. The scene must provide
one of the following combinations:
• 3D points together with point normals, or
• 3D points together with a 2D mapping, from which normals can be computed.
It is important for an accurate Pose that the normals of the scene and the model point in the same direction (see
’scene_invert_normals’).
If the model was trained for edge-supported surface-based matching and the edge-supported matching has not been
turned off via ’use_3d_edges’, only the second combination is possible, i.e., the scene must contain a 2D mapping.
If the model was trained for edge-supported surface-based matching and the scene contains a mapping, normals
contained in the input point cloud are not used (see ’scene_normal_computation’ below).
Further, for models which were trained for edge-supported surface-based matching it is necessary that the normal
vectors point inwards.
Note that triangles or polygons in the passed scene are ignored. Instead, only the vertices are used for matching. It
is thus in general not recommended to use this operator on meshed scenes, such as CAD data. Instead, such a scene
must be sampled beforehand using sample_object_model_3d to create points and normals (e.g., using the
method ’fast_compute_normals’).
When using noisy point clouds, e.g., from time-of-flight cameras, the generic parameter
’scene_normal_computation’ could be set to ’mls’ in order to obtain more robust results (see below).
Parameter Description
SurfaceModelID is the handle of the surface model. The model must have been created previously
with create_surface_model or read in with read_surface_model, respectively. Certain sur-
face model parameters influencing the matching can be set using set_surface_model_param, such as
’pose_restriction_max_angle_diff’ restricting the allowed range of rotations.
ObjectModel3D is the handle of the 3D object model containing the scene in which the matches are searched.
Note that in most cases, it is assumed the scene was observed from a camera looking along the z-axis. This is
important to align the scene normals if they are re-computed (see ’scene_normal_computation’ below). In contrast,
when the model was trained for edge-supported surface-based matching and the scene contains a mapping, normals
are automatically aligned consistently.
The parameter RelSamplingDistance controls the sampling distance during the step Approximate
matching and the Score calculation during the step Sparse pose refinement. Its value is given rela-
tive to the diameter of the surface model. Decreasing RelSamplingDistance leads to more sampled points,
and in turn to a more stable but slower matching. Increasing RelSamplingDistance reduces the number of
sampled scene points, which leads to a less stable but faster matching. For an illustration showing different values
for RelSamplingDistance, please refer to the operator create_surface_model. The sampled scene
points can be retrieved for a visual inspection using the operator get_surface_matching_result. For a
robust matching it is recommended that at least 50-100 scene points are sampled for each object instance.
The parameter KeyPointFraction controls how many points out of the sampled scene points are selected
as key points. For example, if the value is set to 0.1, 10% of the sampled scene points are used as key points.
For stable results it is important that each instance of the object is covered by several key points. Increasing
KeyPointFraction means that more key points are selected from the scene, resulting in a slower but more
stable matching. Decreasing KeyPointFraction has the inverse effect and results in a faster but less stable
matching. The operator get_surface_matching_result can be used to retrieve the selected key points for
visual inspection.
The parameter MinScore can be used to filter the results. Only matches with a score exceeding the value of
MinScore are returned. If MinScore is set to zero, all matches are returned.
For edge-supported surface-based matching (see create_surface_model) four different sub-scores are de-
termined (see their explanation below). For surface-based matching models where view-based score computation
is trained (see create_surface_model), an additional fifth sub-score is determined. As a consequence, you
can filter the results based on each of them by passing a tuple with up to five threshold values to MinScore. These
threshold values are sorted in the order of the scores (see below) and missing entries are regarded as 0, meaning no
filtering based on this sub-score. To find suitable values for the thresholds, the corresponding sub-scores of found
object instances can be obtained using get_surface_matching_result. Depending on the settings, not all
sub-scores might be available. The thresholds for unavailable sub-scores are ignored. The five sub-scores, whose
threshold values have to be passed in exactly this order in MinScore, are ’score’, ’score_surface’,
’score_3d_edges’, ’score_2d_edges’, and ’score_view_based’ (see get_surface_matching_result for a
description of the individual sub-scores).
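A hedged illustration of such a tuple (the threshold values are arbitrary placeholders):
* Hypothetical example: require a combined score of at least 0.3 and a 3D edge
* score of at least 0.15; the remaining sub-scores are not filtered.
MinScore := [0.3,0,0.15]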
The parameter ReturnResultHandle determines if a surface matching result handle is returned or not. If the
parameter is set to ’true’, the handle is returned in the parameter SurfaceMatchingResultID. Additional
details of the matching process can be queried with the operator get_surface_matching_result using
that handle.
The parameters GenParamName and GenParamValue are used to set generic parameters. Both receive a tuple
of equal length, where the tuple passed to GenParamName contains the names of the parameters to set, and the
tuple passed to GenParamValue contains the corresponding values. The possible parameter names and values
are described in the paragraph The three steps of the matching below; a hedged usage sketch follows that paragraph.
The output parameter Pose gives the 3D poses of the found object instances. For every found instance of the
surface model its pose is given in the scene coordinate system, thus the pose is in the form scs Pmcs , where scs
denotes the coordinate system of the scene (which often is identical to the coordinate system of the sensor, the
camera coordinate system) and mcs the model coordinate system (which is a 3D world coordinate system), see
Transformations / Poses and “Solution Guide III-C - 3D Vision”. Thereby, the pose refers to the
original coordinate system of the 3D object model that was passed to create_surface_model.
The output parameter Score returns a score for each match. Its value and interpretation differs for the cases
distinguished below.
The output parameter SurfaceMatchingResultID returns a handle for the surface matching re-
sult. Using this handle, additional details of the matching process can be queried with the operator
get_surface_matching_result. Note, that in order to return the handle, ReturnResultHandle has
to be set to ’true’.
The Three Steps of the Matching
The matching is divided into three steps:
1. Approximate matching The approximate poses of the instances of the surface model in the scene are searched.
The following generic parameters control the approximate matching and can be set with GenParamName
and GenParamValue:
’num_matches’: Sets the maximum number of matches that are returned.
Suggested values: 1, 2, 5
Default: 1
Restriction: ’num_matches’ > 0
’max_overlap_dist_rel’: For efficiency reasons, the maximum overlap can not be defined in 3D. Instead,
only the minimum distance between the centers of the axis-aligned bounding boxes of two matches can
be specified with ’max_overlap_dist_rel’. The value is set relative to the diameter of the object. Once
an object with a high Score is found, all other matches are suppressed if the centers of their bounding
boxes lie too close to the center of the first object. If the resulting matches must not overlap, the value
for ’max_overlap_dist_rel’ should be set to 1.0.
Note that only one of the parameters ’max_overlap_dist_rel’ and ’max_overlap_dist_abs’ should be set.
If both are set, only the value of the last modified parameter is used.
Suggested values: 0.1, 0.5, 1
Default: 0.5
Restriction: ’max_overlap_dist_rel’ >= 0
’max_overlap_dist_abs’: This parameter has the same effect as the parameter ’max_overlap_dist_rel’. Note
that in contrast to ’max_overlap_dist_rel’, the value for ’max_overlap_dist_abs’ is set as an absolute
value. See ’max_overlap_dist_rel’ above, for a description of the effect of this parameter.
Note that only one of the parameters ’max_overlap_dist_rel’ and ’max_overlap_dist_abs’ should be set.
If both are set, only the value of the last modified parameter is used.
Suggested values: 1, 2, 3
Restriction: ’max_overlap_dist_abs’ >= 0
’scene_normal_computation’: This parameter controls the normal computation of the sampled scene.
In the default mode ’fast’, in most cases normals from the 3D scene are used (if it already contains
normals) or computed based on a small neighborhood of points (if not). The computed normals n are
then oriented such that n_z ≥ 0 in case no original normals exist. This orientation of n_z ≥ 0 implies the
assumption that the scene was observed from a camera looking along the z-axis.
In the default mode ’fast’, in case the model was trained for edge-supported surface-based matching and
the scene contains a mapping, input normals are not used and normals are always computed from the
mapping contained in the 3D scene. Further, the computed normals are oriented inwards consistently
with respect to the mapping.
In the mode ’mls’, normals are recomputed based on a larger neighborhood and using the more complex
but often more accurate ’mls’ method. A more detailed description of the ’mls’ method can be found
in the description of the operator surface_normals_object_model_3d. The ’mls’ mode is in-
tended for noisy data, such as images from time-of-flight cameras. The recomputed normals are oriented
as the normals in mode ’fast’.
List of values: ’fast’, ’mls’
Default: ’fast’
’scene_invert_normals’: Invert the orientation of the surface normals of the scene. The orientation of surface
normals of the scene have to match with the orientation of the model. If both the model and the scene are
acquired with the same setup, the normals will already point in the same direction. If you experience the
effect that the model is found on the ’outside’ of the scene surface, try to set this parameter to ’true’. Also,
make sure that the normals in the scene all point either outward or inward, i.e., are oriented consistently.
For edge-supported surface-based matching, the normal vectors have to point inwards; typically, they are
automatically flipped inwards consistently with respect to the mapping. The orientation of the
normals can be inspected using the procedure debug_find_surface_model.
List of values: ’false’, ’true’
Default: ’false’
’3d_edges’: Allows to manually set the 3D scene edges for edge-supported surface-based matching, i.e. if
the surface model was created with ’train_3d_edges’ enabled. The parameter must be a 3D object model
handle. The edges are usually a result of the operator edges_object_model_3d but can further
be filtered in order to remove outliers. If this parameter is not given, find_surface_model will
internally extract the edges similar to the operator edges_object_model_3d.
’3d_edge_min_amplitude_rel’: Sets the threshold when extracting 3D edges for edge-supported surface-
based matching, i.e. if the surface model was created with ’train_3d_edges’ enabled. The threshold
is set relative to the diameter of the object. Note that if edges were passed manually with the generic
parameter ’3d_edges’, this parameter is ignored. Otherwise, it behaves identically to the parameter
MinAmplitude of operator edges_object_model_3d.
Suggested values: 0.05, 0.1, 0.5
Default: 0.05
Restriction: ’3d_edge_min_amplitude_rel’ >= 0
’3d_edge_min_amplitude_abs’: Similar to ’3d_edge_min_amplitude_rel’, however, the value is given as ab-
solute distance and not relative to the object diameter.
Restriction: ’3d_edge_min_amplitude_abs’ >= 0
’viewpoint’: This parameter specifies the viewpoint from which the 3D data is seen. It is used for surface
models that are prepared for view-based score computation (i.e. with ’train_view_based’ enabled) to get
the maximum number of potentially visible points of the model based on the current viewpoint. For this,
GenParamValue must contain a string consisting of the three coordinates (x, y, and z) of the view-
point, separated by spaces. The viewpoint is defined in the same coordinate frame as ObjectModel3D
and should roughly correspond to the position the scene was acquired from. A visualization of the
viewpoint can be created using the procedure debug_find_surface_model in order to inspect its
position.
Default: ’0 0 0’
’max_gap’: Gaps in the 3D data are closed, as far as they do not exceed the maximum gap size ’max_gap’
[pixels] and the surface model was created with ’train_3d_edges’ enabled. Larger gaps will contain
edges at their boundary, while gaps smaller than this value will not. This suppresses edges around
smaller patches that were not reconstructed by the sensor as well as edges at the more distant part of a
discontinuity. For sensors with very large resolutions, the value should be increased to avoid spurious
edges. Note that if edges were passed manually with the generic parameter ’3d_edges’, this param-
eter is ignored. Otherwise, it behaves identically to the parameter GenParamName of the operator
edges_object_model_3d when ’max_gap’ is set.
The influence of ’max_gap’ can be inspected using the procedure debug_find_surface_model.
Default: 30
’use_3d_edges’: Turns the edge-supported matching on or off. This can be used to perform matching without
3D edges, even though the model was created for edge-supported matching. If the model was not created
for edge-supported surface-based matching, an error is returned.
List of values: ’true’, ’false’
Default: ’true’
2. Sparse pose refinement In this second step, the approximate poses found in the previous step are further re-
fined. This increases the accuracy of the poses and the significance of the score value.
The following generic parameters control the sparse pose refinement and can be set with GenParamName
and GenParamValue:
’sparse_pose_refinement’: Enables or disables the sparse pose refinement.
List of values: ’true’, ’false’
Default: ’true’
’pose_ref_use_scene_normals’: Enables or disables the usage of scene normals for the pose refinement. If
this parameter is enabled, and if the scene contains point normals, then those normals are used to increase
the accuracy of the pose refinement. For this, the influence of scene points whose normal points in a
different direction than the model normal is decreased. Note that the scene must contain point normals.
Otherwise, this parameter is ignored.
List of values: ’true’, ’false’
Default: ’false’
’use_view_based’: Turns the view-based score computation for surface-based matching on or off. This can
be used to perform matching without using the view-based score, even though the model was prepared
for view-based score computation. The influence of ’use_view_based’ on the score is explained in the
documentation of Score above.
If the model was not prepared for view-based score computation, an error is returned.
List of values: ’true’, ’false’
Default: ’false’, if ’train_view_based’ was disabled when creating the model, otherwise ’true’.
3. Dense pose refinement Accurately refines the poses found in the previous steps.
The following generic parameters influence the accuracy and speed of the dense pose refinement and can be
set with GenParamName and GenParamValue:
’dense_pose_refinement’: Enables or disables the dense pose refinement.
List of values: ’true’, ’false’
Default: ’true’
’pose_ref_num_steps’: Number of iterations for the dense pose refinement. Increasing the number of itera-
tions leads to a more accurate pose at the expense of runtime. However, once convergence is reached, the
accuracy can no longer be increased, even if the number of steps is increased. Note that this parameter
is ignored if the dense pose refinement is disabled.
Suggested values: 1, 3, 5, 20
Default: 5
Restriction: ’pose_ref_num_steps’ > 0
’pose_ref_sub_sampling’: Set the rate of scene points to be used for the dense pose refinement. For example,
if this value is set to 5, every 5th point from the scene is used for pose refinement. This parameter allows
an easy trade-off between speed and accuracy of the pose refinement: Increasing the value leads to less
points being used and in turn to a faster but less accurate pose refinement. Decreasing the value has the
inverse effect. Note that this parameter is ignored if the dense pose refinement is disabled.
Suggested values: 1, 2, 5, 10
Default: 2
Restriction: ’pose_ref_sub_sampling’ > 0
’pose_ref_dist_threshold_rel’: Set the distance threshold for dense pose refinement relative to the diameter
of the surface model. Only scene points that are closer to the object than this distance are used for the
optimization. Scene points further away are ignored.
Note that only one of the parameters ’pose_ref_dist_threshold_rel’ and ’pose_ref_dist_threshold_abs’
should be set. If both are set, only the value of the last modified parameter is used. Note that this
parameter is ignored if the dense pose refinement is disabled.
Suggested values: 0.03, 0.05, 0.1, 0.2
Default: 0.1
Restriction: ’pose_ref_dist_threshold_rel’ > 0
’pose_ref_dist_threshold_abs’: Set the distance threshold for dense pose refinement as an absolute value.
See ’pose_ref_dist_threshold_rel’ for a detailed description.
Note that only one of the parameters ’pose_ref_dist_threshold_rel’ and ’pose_ref_dist_threshold_abs’
should be set. If both are set, only the value of the last modified parameter is used.
Restriction: ’pose_ref_dist_threshold_abs’ > 0
’pose_ref_scoring_dist_rel’: Set the distance threshold for scoring relative to the diameter of the surface
model. See the following ’pose_ref_scoring_dist_abs’ for a detailed description.
Note that only one of the parameters ’pose_ref_scoring_dist_rel’ and ’pose_ref_scoring_dist_abs’
should be set. If both are set, only the value of the last modified parameter is used. Note that this
parameter is ignored if the dense pose refinement is disabled.
Suggested values: 0.2, 0.01, 0.005, 0.0001
Default: 0.005
Restriction: ’pose_ref_scoring_dist_rel’ > 0
’pose_ref_scoring_dist_abs’: Set the distance threshold for scoring. Only scene points that are closer to the
object than this distance are considered to be ’on the model’ when computing the score after the pose
refinement. All other scene points are considered not to be on the model. The value should correspond
to the amount of noise on the coordinates of the scene points. Note that this parameter is ignored if the
dense pose refinement is disabled.
Note that only one of the parameters ’pose_ref_scoring_dist_rel’ and ’pose_ref_scoring_dist_abs’
should be set. If both are set, only the value of the last modified parameter is used.
’pose_ref_use_scene_normals’: Enables or disables the usage of scene normals for the pose refinement. This
parameter is explained in more details in the section Sparse pose refinement above.
List of values: ’true’, ’false’
Default: ’false’
’pose_ref_dist_threshold_edges_rel’: Set the distance threshold of edges for dense pose refinement relative
to the diameter of the surface model. Only scene edges that are closer to the object edges than this
distance are used for the optimization. Scene edges further away are ignored.
Note that only one of the parameters ’pose_ref_dist_threshold_edges_rel’ and
’pose_ref_dist_threshold_edges_abs’ should be set. If both are set, only the value of the last
modified parameter is used. Note that this parameter is ignored if the dense pose refinement is disabled
or if no edge-supported surface-based matching is used.
Suggested values: 0.03, 0.05, 0.1, 0.2
Default: 0.1
Restriction: ’pose_ref_dist_threshold_edges_rel’ > 0
’pose_ref_dist_threshold_edges_abs’: Set the distance threshold of edges for dense pose refinement as an
absolute value. See ’pose_ref_dist_threshold_edges_rel’ for a detailed description.
Note that only one of the parameters ’pose_ref_dist_threshold_edges_rel’ and
’pose_ref_dist_threshold_edges_abs’ should be set. If both are set, only the value of the last
modified parameter is used. Note that this parameter is ignored if the dense pose refinement is disabled
or if no edge-supported surface-based matching is used.
Restriction: ’pose_ref_dist_threshold_edges_abs’ > 0
’pose_ref_scoring_dist_edges_rel’: Set the distance threshold of edges for scoring relative to the diameter
of the surface model. See the following ’pose_ref_scoring_dist_edges_abs’ for a detailed description.
Note that only one of the parameters ’pose_ref_scoring_dist_edges_rel’ and
’pose_ref_scoring_dist_edges_abs’ should be set. If both are set, only the value of the last modi-
fied parameter is used. Note that this parameter is ignored if the dense pose refinement is disabled or if
no edge-supported surface-based matching is used.
Suggested values: 0.2, 0.01, 0.005, 0.0001
Default: 0.005
Restriction: ’pose_ref_scoring_dist_edges_rel’ > 0
’pose_ref_scoring_dist_edges_abs’: Set the distance threshold of edges for scoring as an absolute value.
Only scene edges that are closer to the object edges than this distance are considered to be ’on the
model’ when computing the score after the pose refinement. All other scene edges are considered not to
be on the model. The value should correspond to the expected inaccuracy of the extracted scene edges
and the inaccuracy of the refined pose.
Note that only one of the parameters ’pose_ref_scoring_dist_edges_rel’ and
’pose_ref_scoring_dist_edges_abs’ should be set. If both are set, only the value of the last modi-
fied parameter is used. Note that this parameter is ignored if the dense pose refinement is disabled or if
no edge-supported surface-based matching is used.
Restriction: ’pose_ref_scoring_dist_edges_abs’ > 0
’use_view_based’: Turns the view-based score computation for surface-based matching on or off. For further
details, see the respective description in the section about the sparse pose refinement above.
If the model was not prepared for view-based score computation, an error is returned.
List of values: ’true’, ’false’
Default: ’false’, if ’train_view_based’ was disabled when creating the model, otherwise ’true’.
’use_self_similar_poses’: Turns the optimization regarding self-similar, almost symmetric poses on or off.
If the model was not created with activated parameter ’train_self_similar_poses’, an error is returned
when setting ’use_self_similar_poses’ to ’true’.
List of values: ’true’, ’false’
Default: ’false’, if ’train_self_similar_poses’ was disabled when creating the model, otherwise ’true’.
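As a hedged sketch of a complete call (the scene is assumed to be given as X, Y, and Z images; all values are placeholders), a search for up to three instances with a slightly more thorough dense pose refinement could look like this:
* Hypothetical sketch: build the 3D scene from an X/Y/Z image triple and search
* for up to three instances of the surface model.
xyz_to_object_model_3d (X, Y, Z, Scene3D)
find_surface_model (SurfaceModelID, Scene3D, 0.05, 0.2, 0.3, 'true', ['num_matches','pose_ref_num_steps'], [3,10], Pose, Score, SurfaceMatchingResultID)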
Parameters
. SurfaceModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .surface_model ; handle
Handle of the surface model.
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model containing the scene.
. RelSamplingDistance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Scene sampling distance relative to the diameter of the surface model.
Default: 0.05
Suggested values: RelSamplingDistance ∈ {0.1, 0.07, 0.05, 0.04, 0.03}
Restriction: 0 < RelSamplingDistance < 1
. KeyPointFraction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Fraction of sampled scene points used as key points.
Default: 0.2
Suggested values: KeyPointFraction ∈ {0.3, 0.2, 0.1, 0.05}
Restriction: 0 < KeyPointFraction <= 1
. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer
Minimum score of the returned poses.
Default: 0
Restriction: MinScore >= 0
. ReturnResultHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Enable returning a result handle in SurfaceMatchingResultID.
Default: ’false’
Suggested values: ReturnResultHandle ∈ {’true’, ’false’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’num_matches’, ’max_overlap_dist_rel’, ’max_overlap_dist_abs’,
’sparse_pose_refinement’, ’dense_pose_refinement’, ’pose_ref_num_steps’, ’pose_ref_sub_sampling’,
’pose_ref_dist_threshold_rel’, ’pose_ref_dist_threshold_abs’, ’pose_ref_scoring_dist_rel’,
’pose_ref_scoring_dist_abs’, ’pose_ref_use_scene_normals’, ’scene_normal_computation’,
’scene_invert_normals’, ’3d_edge_min_amplitude_rel’, ’3d_edge_min_amplitude_abs’, ’viewpoint’,
’max_gap’, ’3d_edges’, ’use_3d_edges’, ’use_view_based’, ’use_self_similar_poses’}
find_surface_model_image ( Image : : SurfaceModelID, ObjectModel3D,
RelSamplingDistance, KeyPointFraction, MinScore, ReturnResultHandle,
GenParamName, GenParamValue : Pose, Score, SurfaceMatchingResultID )
’min_contrast’: Sets the minimum contrast of the object in the search images. Edges with a contrast below this
threshold are ignored in the refinement.
Suggested values: 5, 10, 20
Default: 10
Restriction: ’min_contrast’ >= 0
’max_deformation’: Sets the search range in pixels for corresponding edges in the image. This parameter
can be used if the shape of the object is slightly deformed compared to the original 3D model used in
create_surface_model. Note that increasing this parameter can have a significant impact on the run-
time of the refinement.
Suggested values: 0, 1, 5
Default: 1
Restriction: ’max_deformation’ >= 0
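Assuming the operator described here is find_surface_model_image (see the signature reconstructed above), a hedged sketch that combines the 3D scene with the corresponding 2D image might look as follows (all values are placeholders):
* Hypothetical sketch: edge-supported matching supported by the 2D image,
* allowing deformations of up to 2 pixels.
find_surface_model_image (Image, SurfaceModelID, Scene3D, 0.05, 0.2, 0.3, 'true', ['max_deformation','min_contrast'], [2,15], Pose, Score, SurfaceMatchingResultID)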
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; object : byte / uint2
Images of the scene.
. SurfaceModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .surface_model ; handle
Handle of the surface model.
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model containing the scene.
. RelSamplingDistance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Scene sampling distance relative to the diameter of the surface model.
Default: 0.05
Suggested values: RelSamplingDistance ∈ {0.1, 0.07, 0.05, 0.04, 0.03}
Restriction: 0 < RelSamplingDistance < 1
. KeyPointFraction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Fraction of sampled scene points used as key points.
Default: 0.2
Suggested values: KeyPointFraction ∈ {0.3, 0.2, 0.1, 0.05}
Restriction: 0 < KeyPointFraction <= 1
. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer
Minimum score of the returned poses.
Default: 0
Restriction: MinScore >= 0
. ReturnResultHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Enable returning a result handle in SurfaceMatchingResultID.
Default: ’false’
Suggested values: ReturnResultHandle ∈ {’true’, ’false’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’num_matches’, ’max_overlap_dist_rel’, ’max_overlap_dist_abs’,
’sparse_pose_refinement’, ’dense_pose_refinement’, ’pose_ref_num_steps’, ’pose_ref_sub_sampling’,
’pose_ref_dist_threshold_rel’, ’pose_ref_dist_threshold_abs’, ’pose_ref_scoring_dist_rel’,
’pose_ref_scoring_dist_abs’, ’pose_ref_use_scene_normals’, ’scene_normal_computation’,
’scene_invert_normals’, ’3d_edge_min_amplitude_rel’, ’3d_edge_min_amplitude_abs’, ’viewpoint’,
’max_gap’, ’3d_edges’, ’max_deformation’, ’min_contrast’, ’use_3d_edges’, ’use_view_based’,
’use_self_similar_poses’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; string / real / integer
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {0, 1, ’true’, ’false’, 0.005, 0.01, 0.03, 0.05, 0.1,
’num_scene_points’, ’model_point_fraction’, ’num_model_points’, ’fast’, ’mls’}
Execution Information
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d, get_object_model_3d_params,
read_surface_model, create_surface_model, get_surface_model_param,
edges_object_model_3d
Possible Successors
refine_surface_model_pose, get_surface_matching_result,
clear_surface_matching_result, clear_object_model_3d
Alternatives
refine_surface_model_pose, find_surface_model, refine_surface_model_pose_image
See also
refine_surface_model_pose, find_surface_model
Module
3D Metrology
get_surface_matching_result ( : : SurfaceMatchingResultID,
ResultName, ResultIndex : ResultValue )
The operator get_surface_matching_result returns details about the surface matching result
SurfaceMatchingResultID, which was created by find_surface_model or refine_surface_model_pose.
The result value selected by ResultName and ResultIndex is returned in ResultValue. Among others, the
following values are possible for ResultName:
’sampled_scene’: A 3D object model handle is returned that contains the sampled scene points that were used
in the approximate matching step. This is helpful for tuning the sampling distance for the matching (see
parameter RelSamplingDistance of operator find_surface_model). The parameter ResultIndex is
ignored.
’key_points’: A 3D object model handle is returned that contains all points from the 3D scene that were used
as key points in the matching process. This is helpful for tuning the sampling distance and key point rate
for the matching (see parameter KeyPointFraction of operator find_surface_model). The parameter
ResultIndex is ignored. At least 10 key points should be on the object of interest for stable results.
’score_unrefined’: The score of the result before the dense pose refinement is returned. If the sparse pose
refinement was disabled, this is the score of the approximate matching. Otherwise the score of the
sparse pose refinement is returned. See find_surface_model for details about the score. In
ResultIndex the index of the result must be specified. If SurfaceMatchingResultID was created
by refine_surface_model_pose, 0 is returned.
’sampled_3d_edges’: If the surface model was trained with ’train_3d_edges’ enabled, a 3D object model handle
is returned that contains the sampled 3D edge points that were used in the approximate matching step and in
the sparse refinement step. The parameter ResultIndex is ignored.
The following values are always possible for ResultName, regardless of the operator with which
SurfaceMatchingResultID was created:
’pose’: Returns the pose of the matching or refinement result. In ResultIndex the index of the result must be
specified.
’score_refined’: Returns the score of the result after the dense pose refinement. See find_surface_model
for details about this score. In ResultIndex the index of the result must be specified. If
SurfaceMatchingResultID was created by find_surface_model and dense pose refinement was
disabled, 0 is returned.
’score’: Returns the combined score of the result indexed in ResultIndex; this value is equal to the Score
returned by find_surface_model.
’score_surface’: Returns the surface-based score of the result indexed in ResultIndex. If not specifically set
otherwise, this score is equal to ’score_refined’.
’score_3d_edges’: Returns the 3D edge score of the result indexed in ResultIndex. This score is only appli-
cable for edge-supported surface-based matching.
’score_2d_edges’: Returns the 2D edge score of the result indexed in ResultIndex. This score is only appli-
cable for edge-supported surface-based matching.
’score_view_based’: Returns the view-based score of the result indexed in ResultIndex. This score is only
applicable if the surface model supports view-based score computation.
’all_scores’: Returns for the result indexed in ResultIndex the values of the five scores ’score’,
’score_surface’, ’score_3d_edges’, ’score_2d_edges’, and ’score_view_based’. The scores have the same
order as the thresholds given through the parameter MinScore in the matching and refinement operators.
Parameters
. SurfaceMatchingResultID (input_control) . . . . . . . . . . . . . . . . . . . . . surface_matching_result ; handle
Handle of the surface matching result.
. ResultName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Name of the result property.
Default: ’pose’
List of values: ResultName ∈ {’sampled_scene’, ’key_points’, ’pose’, ’score_unrefined’, ’score_refined’,
’sampled_3d_edges’, ’score’, ’score_surface’, ’score_3d_edges’, ’score_2d_edges’, ’score_view_based’,
’all_scores’}
. ResultIndex (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Index of the matching result, starting with 0.
Default: 0
Suggested values: ResultIndex ∈ {0, 1, 2, 3}
Restriction: ResultIndex >= 0
. ResultValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string / real / handle
Value of the result property.
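A minimal sketch of a typical query sequence; the model and scene file names are hypothetical, and find_surface_model is assumed to have been called with ReturnResultHandle set to ’true’:

* Hypothetical files, shown only to illustrate querying the result handle.
read_surface_model ('model.sfm', SurfaceModelID)
read_object_model_3d ('scene.ply', 'm', [], [], ObjectModel3D, Status)
find_surface_model (SurfaceModelID, ObjectModel3D, 0.05, 0.2, 0.2, 'true', [], [], Pose, Score, SurfaceMatchingResultID)
* Sampled scene and key points help to tune RelSamplingDistance and KeyPointFraction.
get_surface_matching_result (SurfaceMatchingResultID, 'sampled_scene', 0, SampledScene)
get_surface_matching_result (SurfaceMatchingResultID, 'key_points', 0, KeyPoints)
* Pose and refined score of the best match (index 0).
get_surface_matching_result (SurfaceMatchingResultID, 'pose', 0, BestPose)
get_surface_matching_result (SurfaceMatchingResultID, 'score_refined', 0, ScoreRefined)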
Result
If the handle of the result is valid, the operator get_surface_matching_result returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
get_surface_model_param ( : : SurfaceModelID,
GenParamName : GenParamValue )
’diameter’: Diameter of the model point cloud. The diameter is the length of the diagonal of the axis-parallel
bounding box (see parameter ’bounding_box1’).
’center’: Center point of the model. The center point is the center of the axis-parallel bounding box (see parameter
’bounding_box1’).
’bounding_box1’: Smallest enclosing axis-parallel cuboid (min_x, min_y, min_z, max_x, max_y, max_z).
’sampled_model’: The 3D points sampled from the model for matching. This returns an ObjectModel3D that
contains all points sampled from the model surface for matching.
’sampled_pose_refinement’: The 3D model points subsampled from the model for the pose refinement. This
returns an ObjectModel3D that contains all points sampled from the model surface for pose refinement.
’3d_edges_trained’: Returns if the surface model was prepared for edge-supported surface-based matching, i.e.,
if the parameter ’train_3d_edges’ was enabled in create_surface_model. The returned value is either
’true’ or ’false’.
’view_based_trained’: Returns if the surface model was prepared to support view-based score com-
putation for surface-based matching, i.e., if the parameter ’train_view_based’ was enabled in
create_surface_model. The returned value is either ’true’ or ’false’.
’camera_parameter’:
’camera_parameter X’: Returns the camera parameters for camera number X, where X is a zero-based index for
the cameras. If not given, X defaults to zero (first camera). The camera parameters must previously have been
set by set_surface_model_param.
’camera_pose’:
’camera_pose X’: Returns the camera pose for camera number X, where X is a zero-based index for the cameras.
If not given, X defaults to zero (first camera).
’symmetry_axis_direction’:
’symmetry_axis_origin’: Returns the symmetry axis or origin, respectively, as set with
set_surface_model_param. If no axis is set, an empty tuple is returned.
’symmetry_poses’: Returns the symmetry poses as set with set_surface_model_param.
’symmetry_poses_all’: Returns all symmetry poses that were generated from the symmetry poses set with
set_surface_model_param.
Parameters
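A minimal sketch of querying basic model properties; the surface model file name is hypothetical:

* Query basic geometric properties of a surface model (file name hypothetical).
read_surface_model ('model.sfm', SurfaceModelID)
get_surface_model_param (SurfaceModelID, 'diameter', Diameter)
get_surface_model_param (SurfaceModelID, 'center', Center)
get_surface_model_param (SurfaceModelID, 'bounding_box1', BoundingBox1)
* Check whether the model was prepared for edge-supported matching.
get_surface_model_param (SurfaceModelID, '3d_edges_trained', EdgesTrained)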
Possible Predecessors
create_surface_model, read_surface_model
Possible Successors
find_surface_model, refine_surface_model_pose, write_surface_model
See also
create_surface_model, set_surface_model_param
Module
3D Metrology
The maximum possible error in the approximate pose that can still be refined depends on the type of object, the
amount of clutter in the scene and the visible parts of the objects. In general, differences in the orientation of up to
15° and differences in the position of up to 10% can be refined.
The accuracy of the pose refinement is limited to around 0.1% of the model’s size due to numerical reasons. The
accuracy further depends on the noise of the scene points, the number of scene points and the shape of the model.
Details about the pose refinement and the parameters are described in the documentation of
find_surface_model in the section about the dense pose refinement step. The following generic parameters
can be set for refine_surface_model_pose, and are also documented in find_surface_model:
’pose_ref_num_steps’, ’pose_ref_sub_sampling’, ’pose_ref_dist_threshold_rel’, ’pose_ref_dist_threshold_abs’,
’pose_ref_scoring_dist_rel’, ’pose_ref_scoring_dist_abs’, ’pose_ref_use_scene_normals’,
’3d_edge_min_amplitude_rel’, ’3d_edge_min_amplitude_abs’, ’3d_edges’, ’use_3d_edges’, ’use_view_based’,
’use_self_similar_poses’, ’pose_ref_dist_threshold_edges_rel’, ’pose_ref_dist_threshold_edges_abs’,
’pose_ref_scoring_dist_edges_rel’, and ’pose_ref_scoring_dist_edges_abs’.
Parameters
. SurfaceModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .surface_model ; handle
Handle of the surface model.
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model containing the scene.
. InitialPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
Initial pose of the surface model in the scene.
. MinScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer
Minimum score of the returned poses.
Default: 0
Restriction: MinScore >= 0
. ReturnResultHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Enable returning a result handle in SurfaceMatchingResultID.
Default: ’false’
List of values: ReturnResultHandle ∈ {’true’, ’false’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’pose_ref_num_steps’, ’pose_ref_sub_sampling’,
’pose_ref_dist_threshold_rel’, ’pose_ref_dist_threshold_abs’, ’pose_ref_scoring_dist_rel’,
’pose_ref_scoring_dist_abs’, ’pose_ref_use_scene_normals’, ’3d_edge_min_amplitude_rel’,
’3d_edge_min_amplitude_abs’, ’viewpoint’, ’3d_edges’, ’use_3d_edges’, ’use_view_based’,
’use_self_similar_poses’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; string / real / integer
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {0, 1, ’true’, ’false’, 0.005, 0.01, 0.03, 0.05, 0.1}
. Pose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
3D pose of the surface model in the scene.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Score of the found instances of the model.
. SurfaceMatchingResultID (output_control) . . . . . . . . . . . . . surface_matching_result(-array) ; handle
Handle of the matching result, if enabled in ReturnResultHandle.
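A minimal sketch of a refinement call; the file names and the initial pose are hypothetical (in practice the initial pose typically comes from find_surface_model or an external estimate):

* Refine an approximate pose (file names and pose values are hypothetical).
read_surface_model ('model.sfm', SurfaceModelID)
read_object_model_3d ('scene.ply', 'm', [], [], ObjectModel3D, Status)
InitialPose := [0.1,0.05,0.4,10,5,0,0]
refine_surface_model_pose (SurfaceModelID, ObjectModel3D, InitialPose, 0, 'false', ['pose_ref_num_steps','pose_ref_sub_sampling'], [10,1], Pose, Score, SurfaceMatchingResultID)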
Result
refine_surface_model_pose returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an
exception is raised.
Execution Information
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d, get_object_model_3d_params,
read_surface_model, create_surface_model, get_surface_model_param,
find_surface_model, edges_object_model_3d
Possible Successors
get_surface_matching_result, clear_surface_matching_result,
clear_object_model_3d
Alternatives
find_surface_model, refine_surface_model_pose_image, find_surface_model_image
See also
create_surface_model, find_surface_model, refine_surface_model_pose_image
Module
3D Metrology
serialize_surface_model (
: : SurfaceModelID : SerializedItemHandle )
Serialize a surface_model.
serialize_surface_model serializes the data of a surface model (see fwrite_serialized_item
for an introduction of the basic principle of serialization). The same data that is written in a file by
write_surface_model is converted to a serialized item. The surface model is defined by the handle
SurfaceModelID. The serialized surface model is returned by the handle SerializedItemHandle and
can be deserialized by deserialize_surface_model.
Parameters
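A minimal sketch of serializing a model and writing it to a file; the file names are hypothetical:

* Serialize a surface model and write it to a binary file (file names hypothetical).
read_surface_model ('model.sfm', SurfaceModelID)
serialize_surface_model (SurfaceModelID, SerializedItemHandle)
open_file ('model.ser', 'output_binary', FileHandle)
fwrite_serialized_item (FileHandle, SerializedItemHandle)
close_file (FileHandle)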
Result
If the parameters are valid, the operator serialize_surface_model returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
Possible Predecessors
read_surface_model, create_surface_model, get_surface_model_param
Possible Successors
clear_surface_model, fwrite_serialized_item, send_serialized_item,
deserialize_surface_model
See also
create_surface_model, read_surface_model, write_surface_model
Module
3D Metrology
• Defining cameras for image-based refinement. The following parameters allow setting and clearing cam-
era parameters and poses. These are used by the operators find_surface_model_image and
refine_surface_model_pose_image to project the surface model into the passed image.
Note that the camera parameters must be set before the camera pose.
’camera_parameter’:
’camera_parameter X’: Sets the camera parameters for camera number X, where X is a zero-based index for
the cameras. If not given, X defaults to zero (first camera). The camera parameters are used by the operators
find_surface_model_image and refine_surface_model_pose_image, which use the
images corresponding to the camera for the 3D pose refinement. Cameras must be added in increasing
order.
’camera_pose’:
’camera_pose X’: Sets the camera pose for camera number X, where X is a zero-based index for the cameras.
If not given, X defaults to zero (first camera). The pose defaults to the zero-pose [0,0,0,0,0,0,0] when
adding a new camera with ’camera_parameter’. This usually means that camera and 3D sensor have the
same point of origin.
’clear_cameras’: Removes all previously set cameras from the surface model.
• Defining Object Symmetries. The following parameters can be used to define symmetries of the 3D object
which was used for the creation of the surface model. If the 3D object is symmetric, that information can be
used to speed up the surface-based matching. Note that for surface models created with the ’train_3d_edges’
parameter enabled, no symmetries can be set.
By default, no symmetry is active.
Note that for performance reasons, when changing the symmetry with any of the parameters below, certain
internal data structures of the surface model are re-created, which can take a few seconds.
’symmetry_axis_direction’: Set the direction of the symmetry axis of the model. GenParamValue must be
a tuple with three numbers, containing the x-, y- and z-value of the axis direction. The model is modified
to use this symmetry information for speeding up the matching process.
To remove the symmetry information, pass an empty tuple in GenParamValue. Note that either a
symmetry axis or symmetry poses can be set, but not both.
(Figure: an object with a discontinuous symmetry; the symmetry pose for this object is [0,0,0, 0,0,360.0/5, 0].)
• Restrict the pose range. The following parameters can be used to restrict the range of rotations in which
the surface model is searched for by find_surface_model, or the allowed range of rotations for the
refinement with refine_surface_model_pose.
By default, no pose range restriction is active.
Note that for performance reasons, when changing the pose range with any of the parameters below, certain
internal data structures of the surface model are re-created, which can take a few seconds.
’pose_restriction_reference_pose’: Set a reference pose of the model. The reference pose can be used along
with ’pose_restriction_max_angle_diff’, to restrict the allowed range of rotations of the model.
If GenParamValue is an empty tuple, any previously set reference pose is cleared and no pose range
restriction will be active for the model.
Otherwise, GenParamValue must be a pose (see create_pose). Note that the transla-
tion part of the pose is ignored. Also note that both ’pose_restriction_reference_pose’ and
’pose_restriction_max_angle_diff’ must be set in order for the pose restriction to be active.
’pose_restriction_max_angle_diff’: Set by how much the rotation of a pose found with
find_surface_model or refined with refine_surface_model_pose may deviate from the
rotation set with ’pose_restriction_reference_pose’, in radians.
If GenParamValue is an empty tuple, any previously set maximum deviation angle is cleared and no
pose range restriction will be active for the model.
Otherwise, GenParamValue must be an angle, which indicates by how much the rotations of a de-
tected pose ’P’ and the reference pose ’R’ set with ’pose_restriction_reference_pose’ may differ. The
comparison is performed for every model point using the formula ∠(Rv, Pv) ≤ max_angle_diff, where v is
the 3D point vector.
’pose_restriction_allowed_axis_direction’: Set an axis for which rotations are ignored when evaluating
the pose range (see ’pose_restriction_reference_pose’ and ’pose_restriction_max_angle_diff’). If
GenParamValue is an empty tuple, any previously set axis is cleared.
Otherwise, GenParamValue must contain a tuple of three numbers which are the direction of the axis
in model coordinates.
If such an axis is set, then a pose is considered to be within the allowed range if the angle between the axis
in the reference pose and the compared pose is smaller than the allowed angle, i.e., ∠(R axis, P axis) ≤
max_angle_diff.
’pose_restriction_allowed_axis_origin’: Set a point on the allowed rotation axis of the model.
GenParamValue must be a tuple with three numbers, which represent a point in model coordinates
that lies on the symmetry axis of the model. This parameter is optional and defaults to the center of the
model as returned by get_surface_model_param.
’pose_restriction_filter_final_poses_only’: This flag allows switching between two different modes for the
pose range restriction.
If GenParamValue is ’false’ (default), poses outside the defined pose range are removed early
in the matching process. Use this setting if the object pose in the scene is always within the de-
fined rotation range, but the object is sometimes found with incorrect rotations. Note that with
this setting, find_surface_model might return poses that the algorithm considers to be lo-
cally suboptimal, because the locally more optimal poses are outside the allowed pose range. Also
note that with this setting, the pose restriction is observed strictly. When passing an input pose to
refine_surface_model_pose that is outside the allowed pose range, it will be transformed to be
within the allowed pose range.
If GenParamValue is ’true’, only the final poses are filtered before returning them. This allows
removing poses that are valid object poses, but are not needed by the application because, for example,
the object cannot be picked up by the robot in a certain orientation. Note that in this setting, fewer poses
than requested might be returned by find_surface_model if one or more of the final poses are
outside the allowed pose range.
• Modifying self-similarities. The following parameters can be used to adapt the optimization regarding self-
similar poses, i.e., poses under which the model is almost symmetric. The parameters can only be set if the
parameter ’train_self_similar_poses’ was activated during the call of create_surface_model.
Note that for performance reasons, when changing the self-similarity search with any of the parameters below,
certain internal data structures of the surface model are re-created, which can take a few seconds.
’self_similar_poses’: Set the self-similar poses of the model. Those are poses under which the model is very
similar to itself and which can be confused during search.
find_surface_model will find such poses automatically if the parameter ’use_self_similar_poses’
is activated. The poses can be obtained with get_surface_model_param. If the automatically
determined poses are not sufficient to resolve self-similarities, the self-similar poses can be adapted with
this parameter. It is usually not recommended to modify this parameter.
GenParamValue must contain a list of poses. The identity pose will automatically be added to the list
of poses, if it is not already contained in it.
Attention
Note that in some cases, if this operator encounters an error condition while modifying the surface model, such
as an out-of-memory error, the model might be left in an inconsistent, partly changed state. In such cases, it is
recommended to clear the surface model and to no longer use it.
This does not apply to error codes due to invalid parameters, which are checked before performing any model
modification.
Also note that setting some of the options requires re-generation of internal data structures and can take as long as
the original call to create_surface_model.
Parameters
. SurfaceModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .surface_model ; handle
Handle of the surface model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Name of the parameter.
Default: ’camera_parameter’
List of values: GenParamName ∈ {’camera_parameter’, ’camera_pose’, ’clear_cameras’,
’symmetry_axis_direction’, ’symmetry_axis_origin’, ’symmetry_poses’, ’pose_restriction_reference_pose’,
’pose_restriction_max_angle_diff’, ’pose_restriction_allowed_axis_direction’,
’pose_restriction_allowed_axis_origin’, ’pose_restriction_filter_final_poses_only’, ’self_similar_poses’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; real / string / integer
Value of the parameter.
Suggested values: GenParamValue ∈ {’true’, ’false’, [], [0,0,0,0,0,0,0], [0,0,1]}
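A minimal sketch combining several of the settings described above; the file names, camera data, and numeric values are hypothetical, and the settings are assumed to be compatible with the model (e.g., no symmetry for models trained with ’train_3d_edges’):

* Configure a surface model (file names and values are hypothetical).
read_surface_model ('model.sfm', SurfaceModelID)
* Camera for image-based refinement; the parameters must be set before the pose.
read_cam_par ('campar.dat', CameraParam)
set_surface_model_param (SurfaceModelID, 'camera_parameter 0', CameraParam)
set_surface_model_param (SurfaceModelID, 'camera_pose 0', [0,0,0,0,0,0,0])
* Rotational symmetry around the model's z-axis.
set_surface_model_param (SurfaceModelID, 'symmetry_axis_direction', [0,0,1])
* Restrict found poses to at most 30 degrees from a reference orientation.
create_pose (0, 0, 0, 0, 0, 0, 'Rp+T', 'gba', 'point', RefPose)
set_surface_model_param (SurfaceModelID, 'pose_restriction_reference_pose', RefPose)
set_surface_model_param (SurfaceModelID, 'pose_restriction_max_angle_diff', rad(30))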
Result
set_surface_model_param returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an excep-
tion is raised.
Execution Information
Possible Predecessors
read_surface_model, create_surface_model, get_surface_model_param
Possible Successors
clear_surface_model
See also
create_surface_model, read_surface_model
Module
3D Metrology
3D Object Model
4.1 Creation
clear_object_model_3d ( : : ObjectModel3D : )
copy_object_model_3d ( : : ObjectModel3D,
Attributes : CopiedObjectModel3D )
model. The input 3D object model is defined by a handle ObjectModel3D. The operator returns the handle
CopiedObjectModel3D of the new 3D object model. The operator can be used to save memory by removing
attributes that are not needed. Access to the attributes of the 3D object model is possible, e.g., with the operator
get_object_model_3d_params.
The parameter Attributes determines which attributes should be copied. In addition, attributes can be ex-
cluded from copying by using the prefix ~. In order to remove attributes from a 3D object model, the operator
remove_object_model_3d_attrib can be used instead.
Note that because a 3D object model itself consists of a set of attributes, even the point coordinates are an attribute
of the model. This means that at least this attribute must be selected for copy_object_model_3d; otherwise,
the copied object model would be empty. Thus, if only a 3D object model representing a point cloud is to be
copied without further attributes, Attributes must be set to ’point_coord’. If an attribute to be copied is not
available or no attribute is selected, an exception is raised.
The following values for the parameter Attributes are possible:
’point_coord’: This value specifies that the attribute with the 3D point coordinates is copied.
’point_normal’: This value specifies that the attribute with the 3D point normals and the attribute with the 3D
point coordinates are copied.
’triangles’: This value specifies that the attribute with the face triangles and attribute with the 3D point coordinates
are copied.
’polygons’: This value specifies that the attribute with the face polygons and the attribute with the 3D point coor-
dinates are copied.
’lines’: This value specifies that the attribute with the lines and the attribute with the 3D point coordinates are
copied.
’xyz_mapping’: This value specifies that the attribute with the mapping to image coordinates and the attribute with
the 3D point coordinates are copied.
’extended_attribute’: This value specifies that all extended attributes are copied. If it is necessary to copy further
attributes that are related to the extended attributes, these attributes are copied, too. These further attributes
could be, e.g., 3D point coordinates, face triangles, face polygons, or lines.
’primitives_all’: This value specifies that the attribute with the parameters of the primitive (including an empty
primitive) is copied (e.g., obtained from the operator fit_primitives_object_model_3d).
’primitive_plane’: This value specifies that the attribute with the primitive plane is copied (e.g., obtained from the
operator fit_primitives_object_model_3d).
’primitive_sphere’: This value specifies that the attribute with the primitive sphere is copied (e.g., obtained from
the operator fit_primitives_object_model_3d).
’primitive_cylinder’: This value specifies that the attribute with the primitive cylinder is copied (e.g., obtained
from the operator fit_primitives_object_model_3d).
’primitive_box’: This value specifies that the attribute with the primitive box is copied.
’shape_based_matching_3d_data’: This value specifies that the attribute with the prepared shape model for shape-
based 3D matching is copied.
’distance_computation_data’: This value specifies that the attribute with the distance computation data structure
is copied. The distance computation data can be created with prepare_object_model_3d, and can
be used with distance_object_model_3d. If this attribute is selected, then the corresponding target
data attribute of the distance computation is copied as well. For example, if the distance computation was
prepared for triangles, the triangles and the vertices are copied.
’surface_based_matching_data’: This value specifies that the data for surface-based matching is copied. The
attributes with the 3D point coordinates and the attribute with the point normals are copied. If the attribute
with point normals is not available, the attribute with the mapping from the 3D point coordinates to the
image coordinates is copied. If the attribute with the mapping from the 3D point coordinates to the image
coordinates is not available, the attribute with the face triangles is copied. If the attribute with face triangles
is not available, too, the attribute with the face polygons is copied. If none of these attributes is available, an
exception is raised.
’segmentation_data’: This value specifies that the data for a 3D segmentation is copied. The attributes with the 3D
point coordinates and the attribute with the face triangles are copied. If the attribute with the face triangles
is not available, the attribute with the mapping from the 3D point coordinates to the image coordinates is
copied. If none of these attributes is available, an exception is raised.
’score’: This value specifies that the attribute with the scores and the attribute with the 3D point coordinates are
copied. Scores may be obtained from the operator reconstruct_surface_stereo.
’red’: This value specifies that the attribute containing the red color and the attribute with the 3D point coordinates
are copied.
’green’: This value specifies that the attribute containing the green color and the attribute with the 3D point coor-
dinates are copied.
’blue’: This value specifies that the attribute containing the blue color and the attribute with the 3D point coordi-
nates are copied.
’original_point_indices’: This value specifies that the attribute with the original point indices and the attribute
with the 3D point coordinates are copied. Original point indices may be obtained from the operator
triangulate_object_model_3d.
’all’: This value specifies that all available attributes are copied. That is, the attributes are the point coordinates,
the point normals, the face triangles, the face polygons, the mapping to image coordinates, the shape model
for matching, the parameter of a primitive, and the extended attributes.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the input 3D object model.
. Attributes (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string / real / integer
Attributes to be copied.
Default: ’all’
List of values: Attributes ∈ {’point_coord’, ’point_normal’, ’triangles’, ’polygons’, ’xyz_mapping’,
’extended_attribute’, ’shape_based_matching_3d_data’, ’primitives_all’, ’primitive_plane’,
’primitive_sphere’, ’primitive_cylinder’, ’primitive_box’, ’surface_based_matching_data’,
’segmentation_data’, ’distance_computation_data’, ’score’, ’red’, ’green’, ’blue’, ’all’,
’original_point_indices’}
. CopiedObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the copied 3D object model.
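A minimal sketch; the scene file name is hypothetical:

* Copy only selected attributes from a 3D object model (file name hypothetical).
read_object_model_3d ('scene.ply', 'm', [], [], ObjectModel3D, Status)
* Keep only the point coordinates.
copy_object_model_3d (ObjectModel3D, 'point_coord', PointsOnly)
* Keep the triangle mesh (points and triangles).
copy_object_model_3d (ObjectModel3D, ['point_coord','triangles'], Mesh)
clear_object_model_3d (ObjectModel3D)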
Result
copy_object_model_3d returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an ex-
ception is raised.
Execution Information
deserialize_object_model_3d (
: : SerializedItemHandle : ObjectModel3D )
principle of serialization). The serialized 3D object model is defined by the handle SerializedItemHandle.
The deserialized values are stored in an automatically created 3D object model with the handle ObjectModel3D.
Parameters
. SerializedItemHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
. ObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model.
Result
If the parameters are valid, the operator deserialize_object_model_3d returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
Result
gen_box_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an excep-
tion is raised.
Execution Information
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
smallest_bounding_box_object_model_3d
Possible Successors
get_object_model_3d_params, sample_object_model_3d, clear_object_model_3d
See also
gen_cylinder_object_model_3d, gen_sphere_object_model_3d,
gen_sphere_object_model_3d_center, gen_plane_object_model_3d
Module
3D Metrology
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
get_object_model_3d_params, sample_object_model_3d, clear_object_model_3d
See also
gen_sphere_object_model_3d, gen_sphere_object_model_3d_center,
gen_plane_object_model_3d, gen_box_object_model_3d
Module
3D Metrology
gen_empty_object_model_3d ( : : : EmptyObjectModel3D )
gen_object_model_3d_from_points ( : : X, Y, Z : ObjectModel3D )
Create a 3D object model that represents a point cloud from a set of 3D points.
gen_object_model_3d_from_points creates a 3D object model that represents a point cloud. The points
are described by x-, y-, and z-coordinates in the parameters X, Y, and Z.
Parameters
. X (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.x(-array) ; real
The x-coordinates of the points in the 3D point cloud.
. Y (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point3d.y(-array) ; real
The y-coordinates of the points in the 3D point cloud.
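A minimal sketch of gen_object_model_3d_from_points using literal coordinate tuples:

* Create a small point cloud from literal coordinate tuples.
X := [0.0, 0.1, 0.1, 0.0]
Y := [0.0, 0.0, 0.1, 0.1]
Z := [0.0, 0.0, 0.0, 0.05]
gen_object_model_3d_from_points (X, Y, Z, ObjectModel3D)
get_object_model_3d_params (ObjectModel3D, 'num_points', NumPoints)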
gen_sphere_object_model_3d_center ( : : X, Y, Z,
Radius : ObjectModel3D )
’om3’: HALCON format for 3D object model. Files with this format can be written by
write_object_model_3d. The default file extension for this format is ’om3’.
’dxf’: AUTOCAD format. HALCON supports only the ASCII version of the format. See below for details about
reading this file format. The default file extension for this format is ’dxf’.
’off’: Object File Format. This is a simple ASCII-based format that can hold 3D points and polygons. The binary
OFF format is not supported. The default file extension for this format is ’off’.
’ply’: Polygon File Format (also Stanford Triangle Format). This is a simple format that can hold 3D points,
point normals, polygons, color information and point-based extended attributes. HALCON supports both
the ASCII and the binary version of the format. If the file to be read contains unsupported information, the
additional data is ignored and only the supported data is read. If the name of a property entry of a ’ply’
file coincides with the name of a standard attribute (see set_object_model_3d_attrib), the property
will preferably be read into the standard attribute. The default file extension for this format is ’ply’.
’obj’: OBJ file format, also ’Wavefront OBJ-Format’. This is an ASCII-based format that can hold 3D points,
polygons, normals, texture coordinates, materials and other information. HALCON supports points (’v’-
lines), point normals (’vn’-lines) and polygonal faces (’f’-lines). Existing point normals are only returned
if there are exactly as many point normals as there are points. Other entities are ignored. The default file
extension for this format is ’obj’.
’stl’,
’stl_binary’,
’stl_ascii’: STL file format, also ’Stereolithography format’, ’SurfaceTesselationLanguage’, ’StandardTriangula-
tionLanguage’, and ’StandardTesselationLanguage’. This format stores triangles and triangle normals. How-
ever, as triangle normals are not supported by HALCON 3D object models, only triangles are read while the
triangle normals are ignored. Normals are recomputed from the triangles if required. HALCON reads both
the ASCII and the binary version of this format. If ’stl’ is set, HALCON will auto-detect the type. Setting
the type to ’stl_binary’ or ’stl_ascii’ will enforce the corresponding format. The default file extension for this
format is ’stl’.
’step’: STEP file format, also STP or ’Standard for the Exchange of Product Model Data’. This is a complex
format that stores a large variety of geometrical definitions which allows an accurate storage of 3D models.
Due to the limited support for the geometrical structures defined by STEP in HALCON 3D object models,
triangulation is performed on these geometries, resulting in models comprised of triangle meshes. The default
file extensions for this format are ’step’ and ’stp’.
’generic_ascii’: This format can be used to read different ASCII files containing 3D data in tabular form, e.g.
’ptx’, ’pts’, ’xyz’ or ’pcd’. Currently, only point based attributes are supported, no triangles or polygons. The
information for each 3D point is expected to be written in a single line, one point at a time. The file format
must be further specified by setting the generic parameter ’ascii_format’.
When reading a DXF file, the output parameter Status contains information about the number of 3D faces that
were read and, if necessary, warnings that parts of the DXF file could not be interpreted.
The parameter Scale defines the scale of the file. For example, if the parameter is set to ’mm’, all units in the file
are assumed to have the unit ’mm’ and are transformed into the usual HALCON-internal unit ’m’ by multiplication
with 0.001. A value of ’100 mm’ thus becomes ’0.1 m’. Alternatively, a scaling factor can be passed to Scale,
which is multiplied with all coordinate values found in the file. The relation of units to scaling factors is given in
the following table:
Note that the parameter Scale is ignored for files of type ’om3’ and ’step’. om3-files are always read without
any scale changes. For step-files, the unit is directly defined in the files, read along with the stored data and used
to scale to the HALCON-internal unit ’m’. For changing the scale manually after reading a 3D object model, use
affine_trans_object_model_3d.
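For illustration, two equivalent calls reading a file whose coordinates are stored in millimeters; the file name is hypothetical:

* Read a CAD file stored in millimeters (file name hypothetical).
read_object_model_3d ('part.stl', 'mm', [], [], ObjectModel3D, Status)
* Equivalent call using a numeric scaling factor instead of a unit string.
read_object_model_3d ('part.stl', 0.001, [], [], ObjectModel3D, Status)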
A set of additional optional parameters can be set. The names and values of the parameters are passed in
GenParamName and GenParamValue, respectively. Some of the optional parameters can only be set for a
certain file type. The following values for GenParamName are possible:
’file_type’: Forces a file type. If this parameter is not set, the operator read_object_model_3d tries to auto-
detect the file type using the file ending and the file header. If the parameter is set, the given file is interpreted
as this file format.
List of values: ’om3’, ’dxf’, ’off’, ’ply’, ’obj’, ’stl’, ’step’, ’generic_ascii’.
’convert_to_triangles’: Convert all faces to triangles. If this parameter is set to ’true’, all faces read from the file
are converted to triangles.
Valid for formats: ’dxf’, ’ply’, ’off’, ’obj’.
List of values: ’true’, ’false’.
Default: ’false’.
’invert_normals’: Invert normals and face orientations. If this parameter is set to ’true’, the orientation of all
normals and faces is inverted.
Valid for formats: ’dxf’, ’ply’, ’off’, ’obj’, ’stl’, ’step’, ’generic_ascii’.
List of values: ’true’, ’false’.
Default: ’false’.
’max_approx_error’, ’min_num_points’: DXF-specific parameters (see below).
Valid for formats: ’dxf’.
’max_surface_deviation’: STEP-specific parameter.
Specifies the maximum allowed deviation (in ’m’) from the model surface during the triangulation. A smaller
value will generate a more accurate model but will also increase the reading time and the number of points and
triangles in the resulting model. Set the parameter to ’auto’ in order to estimate it automatically depending
on the size of the model.
Valid for formats: ’step’.
Suggested values: ’auto’, 0.0001, 0.00001.
Default: ’auto’.
Restriction: ’max_surface_deviation’ > 0
’split_level’: STEP-specific parameter.
STEP files can contain definitions of independent model components. With this parameter, each component
can be imported as a HALCON 3D object model. If the parameter is set to 0, the file is imported as a single
model. With 1 the model components are roughly separated from each other, while 2 separates the model
components at a more detailed level.
Valid for formats: ’step’.
List of values: 0, 1, 2.
Default: 0.
’ascii_format’: generic_ascii-specific parameter.
Specifies the format of the ASCII file to be read. As value, a dict containing information about the file content
must be provided. The dict defines the columns to be read and meta-data like the first line number containing
point information. Examples are given at the bottom of the operator reference or in the HDevelop example
read_object_model_3d_generic_ascii.hdev. The following parameters can be set as dict keys:
’columns’: (mandatory) Defines the column attributes in the read file, given as a tuple of
strings. All point-related standard and extended attributes as listed in the reference of
set_object_model_3d_attrib are supported. At least, ’point_coord_x’, ’point_coord_y’
and ’point_coord_z’ must be set. When setting normals, all three components ’point_normal_x’,
’point_normal_y’ and ’point_normal_z’ must be set. Ignoring columns is possible by setting the empty string
’’ at the corresponding tuple position.
Suggested values: [’point_coord_x’, ’point_coord_y’, ’point_coord_z’], [’point_normal_x’,
’point_normal_y’, ’point_normal_z’], ’red’, ’green’, ’blue’, ’&my_custom_attrib’, ’’.
’separator’: (mandatory) Defines the separator between the columns. Currently, whitespace (blanks or
tabs) and semicolon are supported.
List of values: ’ ’, ’;’.
’first_point_line’: (optional) Describes the number of the first line to be read from the file and can e.g. be
used to skip header information. The top line in the file corresponds to ’first_point_line’ 1.
Default: 1.
Restriction: ’first_point_line’ > 0
’last_point_line’: (optional) Describes the number of the last line to be read from the file and can e.g. be
used to skip unsupported information. The top line in the file corresponds to ’last_point_line’ 1. When
’last_point_line’ is set to -1, all lines are read.
Default: -1.
Restriction: ’last_point_line’ >= -1
’comment’: (optional) Describes the start of comments in the read file. Information after the comment
start is ignored when reading the file.
Suggested values: ’#’, ’*’, ’/’, ’comment’.
Valid for formats: ’generic_ascii’.
’xyz_map_width’: Creates a mapping for the read 3D object model that assigns an image coordinate to each read
3D point, as in xyz_to_object_model_3d. It is assumed that the read file contains the 3D points row-
wise. The passed value is used as width of the image. The height of the image is computed automatically.
If this parameter is set, the read 3D object model can be projected by object_model_3d_to_xyz using
the method ’from_xyz_map’. Only one of the two parameters ’xyz_map_width’ and ’xyz_map_height’ can be
set.
Valid for formats: ’ply’, ’off’, ’obj’, ’generic_ascii’.
Default: -1.
Restriction: ’xyz_map_width’ > 0
’xyz_map_height’: As ’xyz_map_width’, but assuming that the 3D points are aligned column-wise. The
width of the image is computed automatically. Only one of the two parameters ’xyz_map_width’ and
’xyz_map_height’ can be set.
Valid for formats: ’ply’, ’off’, ’obj’, ’generic_ascii’.
Default: -1.
Restriction: ’xyz_map_height’ > 0
Note that in many cases, it is recommended to use the 2D mapping data, if available, for speed
and robustness reasons. This is beneficial especially when using sample_object_model_3d,
surface_normals_object_model_3d, or when preparing a 3D object model for surface-based matching,
e.g., smoothing, removing outliers, and reducing the domain.
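A minimal sketch of reading an ordered point cloud while creating the 2D mapping; the file name and image width are hypothetical:

* Read a row-wise ordered point cloud and create the xyz mapping (values hypothetical).
read_object_model_3d ('scan.ply', 'm', 'xyz_map_width', 640, ObjectModel3D, Status)
* The mapping enables, e.g., object_model_3d_to_xyz with method 'from_xyz_map'.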
The operator read_object_model_3d supports the following DXF entities:
• POLYLINE
– Polyface meshes (Polyline flag 64)
– 3D Polylines (Polyline flag 8,9)
– 2D Polylines (Polyline flag 0)
• LWPOLYLINE
– 2D Polylines
• 3DFACE
• LINE
• CIRCLE
• ARC
• SOLID
• BLOCK
• INSERT
The two-dimensional linear DXF entities LINE, CIRCLE, and ARC are not interpreted as faces. Only if these
elements are extruded are the resulting faces inserted into the 3D object model. All elements that represent lines
rather than faces are added as 3D lines to the 3D object model.
The curved surface of extruded DXF entities of the type CIRCLE and ARC is approximated by planar faces.
The accuracy of this approximation can be controlled with the two generic parameters ’min_num_points’ and
’max_approx_error’. The parameter ’min_num_points’ defines the minimum number of sampling points that are
used for the approximation of the DXF element CIRCLE or ARC. Note that the parameter ’min_num_points’
always refers to the full circle, even for ARCs, i.e., if ’min_num_points’ is set to 50 and a DXF entity of the
type ARC is read that represents a semi-circle, this semi-circle is approximated by at least 25 sampling points.
The parameter ’max_approx_error’ defines the maximum deviation of the XLD contour from the ideal circle. The
determination of this deviation is carried out in the units used in the DXF file. For the determination of the accuracy
of the approximation both criteria are evaluated. Then, the criterion that leads to the more accurate approximation
is used.
Internally, the following default values are used for the generic parameters:
• ’min_num_points’ = 20
• ’max_approx_error’ = 0.25
To achieve a more accurate approximation, either the value for ’min_num_points’ must be increased or the value
for ’max_approx_error’ must be decreased.
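A minimal sketch of reading a DXF file with a finer circle approximation; the file name and values are hypothetical:

* Read a DXF file with a more accurate approximation of circles and arcs.
read_object_model_3d ('part.dxf', 'mm', ['min_num_points','max_approx_error'], [50, 0.05], ObjectModel3D, Status)
* Status reports the number of faces read and possible warnings.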
One possible way to create a suitable DXF file is to create a 3D model of the object with the CAD program
AutoCAD. Ensure that the surface of the object is modeled, not only its edges. Lines that, e.g., define object
edges, will not be used by HALCON, because they do not define the surface of the object. Once the modeling is
completed, you can store the model in DWG format. To convert the DWG file into a DXF file that is suitable for
HALCON’s 3D matching, carry out the following steps:
• Export the 3D CAD model to a 3DS file using the 3dsout command of AutoCAD. This will triangulate the
object’s surface, i.e., the model will only consist of planes. (Users of AutoCAD 2007 or newer versions can
download this command utility from Autodesk’s web site.)
• Open a new empty sheet in AutoCAD.
• Import the 3DS file into this empty sheet with the 3dsin command of AutoCAD.
• Save the object into a DXF R12 file.
Users of other CAD programs should ensure that the surface of the 3D model is triangulated before it is exported
into the DXF file. If the CAD program is not able to carry out the triangulation, it is often possible to save the 3D
model in the proprietary format of the CAD program and to convert it into a suitable DXF file by using a CAD file
format converter that is able to perform the triangulation.
Parameters
* Example how to use file_type generic_ascii and generic parameter ascii_format to read a file
FileFormat := dict{}
FileFormat.separator := ' '
FileFormat.columns := ['point_coord_x', 'point_coord_y', 'point_coord_z', 'point_normal_x', 'point_normal_y', 'point_normal_z']
FileFormat.first_point_line := 14
FileFormat.last_point_line := 2273
FileFormat.comment := 'comment'
read_object_model_3d ('glass_mug.ply', 'm', ['file_type', 'ascii_format'], ['generic_ascii', FileFormat], ObjectModel3D, Status)
Result
The operator read_object_model_3d returns the value 2 (H_MSG_TRUE) if the given parameters are correct,
the file can be read, and the file is valid. If the file format is unknown or cannot be determined, the error 9512 is
raised. If the file is invalid, the error 9510 is raised. If necessary, an exception will be raised.
Execution Information
remove_object_model_3d_attrib ( : : ObjectModel3D,
Attributes : ObjectModel3DOut )
Standard attributes
The following values for the parameter Attributes are possible:
’point_normal’: This value specifies that the attribute with the 3D point normals and the attribute with the 3D
point coordinates are removed.
’triangles’: This value specifies that the attribute with the face triangles is removed.
’polygons’: This value specifies that the attribute with the face polygon is removed.
’lines’: This value specifies that the attribute with the lines is removed.
’xyz_mapping’: This value specifies that the attribute with the mapping to image coordinates is removed.
’extended_attribute’: This value specifies that all user-defined extended attributes are removed.
’primitives_all’: This value specifies that the attribute with the parameters of the primitive (including an empty
primitive) is removed (e.g., obtained from the operator fit_primitives_object_model_3d).
’primitive_plane’: This value specifies that the attribute with the primitive plane is removed (e.g., obtained from
the operator fit_primitives_object_model_3d).
’primitive_sphere’: This value specifies that the attribute with the primitive sphere is removed (e.g., obtained from
the operator fit_primitives_object_model_3d).
’primitive_cylinder’: This value specifies that the attribute with the primitive cylinder is removed (e.g., obtained
from the operator fit_primitives_object_model_3d).
’primitive_box’: This value specifies that the attribute with the primitive box is removed.
’shape_based_matching_3d_data’: This value specifies that the attribute with the prepared shape model for shape-
based 3D matching is removed.
’distance_computation_data’: This value specifies that the attribute with the distance computation data structure
is removed. The distance computation data can be created with prepare_object_model_3d, and can
be used with distance_object_model_3d.
’all’: This value specifies that all available attributes are removed except for the point coordinates. That is, the
attributes are the point normals, the face triangles, the face polygons, the mapping to image coordinates, the
shape model for matching, the parameter of a primitive, and the extended attributes.
Extended attributes
Extended attributes are attributes that can be derived from standard attributes by special operators (e.g.,
distance_object_model_3d), or user-defined attributes (set with set_object_model_3d_attrib
or set_object_model_3d_attrib_mod). The extended attributes can be removed by setting their names
in Attributes.
The following predefined extended attributes can be removed:
’original_point_indices’: This value specifies that the attribute with the original point indices is removed. Original
point indices may be obtained from the operator triangulate_object_model_3d.
’score’: This value specifies that the attribute with the scores is removed. Scores may be obtained from the operator
reconstruct_surface_stereo.
’red’: This value specifies that the attribute containing the red color is removed.
’green’: This value specifies that the attribute containing the green color is removed.
’blue’: This value specifies that the attribute containing the blue color is removed.
’edge_dir_x’: This value specifies that the vector for the X axis is removed.
’edge_dir_y’: This value specifies that the vector for the Y axis is removed.
’edge_dir_z’: This value specifies that the vector for the Z axis is removed.
’edge_amplitude’: This value specifies that the vector for the amplitude is removed.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the input 3D object model.
. Attributes (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Name of the attributes to be removed.
Default: ’extended_attribute’
List of values: Attributes ∈ {’point_normal’, ’triangles’, ’lines’, ’polygons’, ’xyz_mapping’,
’shape_based_matching_3d_data’, ’distance_computation_data’, ’primitives_all’, ’primitive_plane’,
’primitive_sphere’, ’primitive_cylinder’, ’primitive_box’, ’extended_attribute’, ’score’, ’red’, ’green’, ’blue’,
’original_point_indices’, ’all’}
. ObjectModel3DOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the resulting 3D object model.
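A minimal sketch; the file name is hypothetical:

* Remove attributes that are no longer needed (file name hypothetical).
read_object_model_3d ('scene.ply', 'm', [], [], ObjectModel3D, Status)
remove_object_model_3d_attrib (ObjectModel3D, ['score','red','green','blue'], ObjectModel3DOut)
* Alternatively, remove all user-defined extended attributes in one call.
remove_object_model_3d_attrib (ObjectModel3D, 'extended_attribute', Cleaned)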
Result
If the parameters are valid, the operator remove_object_model_3d_attrib returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
Possible Predecessors
set_object_model_3d_attrib
Possible Successors
get_object_model_3d_params
Alternatives
remove_object_model_3d_attrib_mod
See also
copy_object_model_3d, set_object_model_3d_attrib
Module
3D Metrology
remove_object_model_3d_attrib_mod ( : : ObjectModel3D,
Attributes : )
Possible Predecessors
set_object_model_3d_attrib_mod
Possible Successors
get_object_model_3d_params
Alternatives
remove_object_model_3d_attrib
See also
copy_object_model_3d, set_object_model_3d_attrib_mod
Module
3D Metrology
serialize_object_model_3d (
: : ObjectModel3D : SerializedItemHandle )
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d
Possible Successors
read_object_model_3d, fwrite_serialized_item, send_serialized_item,
deserialize_object_model_3d
See also
read_object_model_3d
Module
3D Metrology
’point_coord_x’: The x-coordinates of the 3D points are set with AttribValues. If the attribute does not exist,
the x-, y- and z-coordinates must be set with ’point_coord_x’, ’point_coord_y’, and ’point_coord_z’ at once.
The number of x-, y-, and z-coordinates must be identical.
’point_coord_y’: The y-coordinates of the 3D points are set with AttribValues. If the attribute does not exist,
the x-, y- and z-coordinates must be set with ’point_coord_x’, ’point_coord_y’, and ’point_coord_z’ at once.
The number of x-, y-, and z-coordinates must be identical.
’point_coord_z’: The z-coordinates of the 3D points are set with AttribValues. If the attribute does not exist,
the x-, y- and z-coordinates must be set with ’point_coord_x’, ’point_coord_y’, and ’point_coord_z’ at once.
The number of x-, y-, and z-coordinates must be identical.
’point_normal_x’: The x-components of the 3D point normals of the 3D points are set with AttribValues.
If the attribute does not exist, the x-, y- and z-components of 3D point normals must be set with
’point_normal_x’, ’point_normal_y’, and ’point_normal_z’ at once. The number of x-, y-, and z-components
must be identical to the number of 3D points. Note that the given 3D point normals will not be normalized to
a length of 1.
’point_normal_y’: The y-components of the 3D point normals of the 3D points are set with AttribValues.
If the attribute does not exist, the x-, y- and z-components of 3D point normals must be set with
’point_normal_x’, ’point_normal_y’, and ’point_normal_z’ at once. The number of x-, y-, and z-components
must be identical to the number of 3D points. Note that the given 3D point normals will not be normalized to
a length of 1.
’point_normal_z’: The z-components of the 3D point normals of the 3D points are set with AttribValues.
If the attribute does not exist, the x-, y- and z-components of 3D point normals must be set with
’point_normal_x’, ’point_normal_y’, and ’point_normal_z’ at once. The number of x-, y-, and z-components
must be identical to the number of 3D points. Note that the given 3D point normals will not be normalized to
a length of 1.
’triangles’: The indices of the 3D points that represent triangles are set with AttribValues in the following
order: The first three values of AttribValues (input values 0,1,2) represent the first triangle and contain
the indices of the corresponding 3D points of the triangle corners. The second three values (input values
3,4,5) represent the second triangle etc. The direction of the triangles results from the order of the point
indices.
’polygons’: The indices of the 3D points that represent polygons are set with AttribValues in the following
order: The first value of AttribValues contains the number n of points of the first polygon. The following
values (input values 1,2,..,n) contains the indices of the points of the first polygon. The next value (input
value n+1) contains the number m of the points of the second polygon. The following m values (input values
n+2,n+3,..,n+1+m) contain the indices of the points of the second polygon etc.
’lines’: The indices of the 3D points that represent polylines are set with AttribValues in the following order:
The first value of AttribValues contains the number n of points of the first polyline. The following
values (input values 1,2,..,n) represent the indices of the points of the first polyline. The next value (input
value n+1) contains the number m of points of the second polyline. The following m values (input values
n+2,n+3,..,n+1+m) represent the indices of the points of the second polyline etc. All indices correspond to
already existing 3D points.
’xyz_mapping’: The mapping of 3D points to image coordinates is set with AttribValues in the following
order: The first two values of AttribValues (input value 0 and 1) contain the width and height of the
respective image. The following n values (input values 2,3,..,n+1, with n being the number of 3D points)
represent the row coordinates of the n points given in image coordinates. The next n input values (input
values n+2,n+3,..,n*2+1) represent the column coordinates of the n points in image coordinates. Hence, the
total number of input values is n*2+2.
Extended attributes
Extended attributes are attributes that can be derived from standard attributes by special operators (e.g.,
distance_object_model_3d), or user-defined attributes. Predefined extended attributes can only be set
separately, for these attributes AttachExtAttribTo will be ignored. The names of user-defined extended
attributes are arbitrary, but must start with the prefix ’&’, e.g., ’&my_attrib’. Extended attributes can have an
arbitrary number of floating point values.
The following predefined extended attributes can be set:
’original_point_indices’: The original point indices of the 3D points are set with AttribValues. The number
of the original point indices must be identical to the number of 3D points.
’score’: The scores of a 3D reconstruction of the 3D points are set with AttribValues. Since the score is
evaluated separately for each 3D point, the number of score values must be identical to the number
of 3D points.
’red’: The red channel intensities of the 3D points are set with AttribValues. The number of color values
must be identical to the number of 3D points.
’green’: The green channel intensities of the 3D points are set with AttribValues. The number of color values
must be identical to the number of 3D points.
’blue’: The blue channel intensities of the 3D points are set with AttribValues. The number of color values
must be identical to the number of 3D points.
’edge_dir_x’: The x-component of a vector that is perpendicular to the edge direction and the viewing direction.
’edge_dir_y’: The y-component of a vector that is perpendicular to the edge direction and the viewing direction.
’edge_dir_z’: The z-component of a vector that is perpendicular to the edge direction and the viewing direction.
’edge_amplitude’: Contains the amplitude of edge points.
Extended attributes can be attached to already existing standard attributes of the 3D object model by setting the
parameter AttachExtAttribTo. The following values of AttachExtAttribTo are possible:
’object’ or []: If this value is set, the extended attribute specified in AttribName is associated to the 3D object
model as a whole. The number of values specified in AttribValues is not restricted.
’points’: If this value is set, the extended attribute specified in AttribName is associated to the 3D points of
the object model. The number of values specified in AttribValues must be the same as the number of
already existing 3D points.
’triangles’: If this value is set, the extended attribute specified in AttribName is associated to the triangles of
the object model. The number of values specified in AttribValues must be the same as the number of
already existing triangles.
’polygons’: If this value is set, the extended attribute specified in AttribName is associated to the polygons of
the object model. The number of values specified in AttribValues must be the same as the number of
already existing polygons.
’lines’: If this value is set, the extended attribute specified in AttribName is associated to the lines of the object
model. The number of values specified in AttribValues must be the same as the number of already
existing lines.
Attention
If multiple attributes are given in AttribName, AttribValues is divided into sub-tuples of equal length.
Each sub-tuple is then assigned to one attribute. E.g., if AttribName and AttribValues are set to
AttribName := [’&attrib1’,’&attrib2’,’&attrib3’],
AttribValues := [0.0,1.0,2.0,3.0,4.0,5.0],
the following values are assigned to the individual attributes:
’&attrib1’ = [0.0,1.0], ’&attrib2’ = [2.0,3.0], ’&attrib3’ = [4.0,5.0].
Consequently, it is not possible to set multiple attributes of different lengths in one call.
set_object_model_3d_attrib stores the input AttribValues unmodified in the 3D object model.
Therefore, special attention must be paid to the consistency of the input data, as most of the operators expect
consistent 3D object models.
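The following minimal sketch illustrates attaching a user-defined extended attribute to the points of an existing 3D
object model. The file name 'object.om3' is a placeholder, and the parameter order of set_object_model_3d_attrib
(ObjectModel3D, AttribName, AttachExtAttribTo, AttribValues, ObjectModel3DOut) is assumed, as the signature is
not shown in this excerpt.
* Read a 3D object model (placeholder file name) and query its number of points.
read_object_model_3d ('object.om3', 'm', [], [], ObjectModel3D, Status)
get_object_model_3d_params (ObjectModel3D, 'num_points', NumPoints)
* Attach one value per 3D point as a user-defined extended attribute.
Scores := gen_tuple_const(NumPoints, 1.0)
set_object_model_3d_attrib (ObjectModel3D, '&my_score', 'points', Scores, ObjectModel3DOut)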
Parameters
See also
copy_object_model_3d, remove_object_model_3d_attrib
Module
3D Metrology
union_object_model_3d ( : : ObjectModels3D,
Method : UnionObjectModel3D )
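A minimal usage sketch, assuming two existing 3D object model handles; the Method value 'points_surface' is an
assumption, as the supported methods are not listed in this excerpt.
* Merge two 3D object models into a single 3D object model.
union_object_model_3d ([ObjectModel3D1,ObjectModel3D2], 'points_surface', UnionObjectModel3D)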
Result
union_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If there is no attribute
common in all input objects, an exception is raised.
Execution Information
Possible Predecessors
get_object_model_3d_params
Possible Successors
connection_object_model_3d, convex_hull_object_model_3d
See also
gen_box_object_model_3d, gen_sphere_object_model_3d,
gen_cylinder_object_model_3d
Module
3D Metrology
’om3’: HALCON format for object model 3D. Files with this format can be read by read_object_model_3d.
The default file extension for this format is ’om3’.
’dxf’: AUTOCAD format. See read_object_model_3d for details about reading this file format. The default
file extension for this format is ’dxf’.
’off’: Object File Format. This is an ASCII-based format that can hold 3D points and polygons. The default file
extension for this format is ’off’.
’ply’,
’ply_binary’: Polygon File Format (also Stanford Triangle Format). This is a simple format that can hold 3D
points, point normals, polygons, color information and point-based extended attributes. HALCON supports
the writing of both the ASCII and the binary version of this format. The default file extension for this format
is ’ply’.
’obj’: OBJ file format, also Wavefront OBJ-Format. This is an ASCII-based format that can hold 3D points,
polygons, normals, and triangles, which are stored as polygons. The default file extension for this format is
’obj’.
’stl’,
’stl_binary’,
’stl_ascii’: STL file format, also ’Stereolithography format’, ’SurfaceTesselationLanguage’,
’StandardTriangulationLanguage’, and ’StandardTesselationLanguage’. This format stores triangles and
triangle normals. However, as triangle normals are not supported by HALCON 3D object models and point
normals (which are, for example, calculated by surface_normals_object_model_3d) are not
supported by the STL format, no normals are written to file. If the 3D object model contains polygons, they
are converted to triangles before writing them to disc. If the file type is set to ’stl’ or ’stl_binary’, the binary
version of STL is written, while ’stl_ascii’ selects the ASCII version. The default file extension for this
format is ’stl’.
A set of additional optional parameters can be set. The names and values of the parameters are passed in
GenParamName and GenParamValue, respectively. Some of the optional parameters can only be set for a
certain file type. The following values for GenParamName are possible:
’invert_normals’: Invert normals and face orientation before saving the 3D object model. If this value is set to
’true’, for the formats ’off’, ’ply’, ’obj’, and ’stl’ the orientation of faces (triangles and polygons) is inverted.
For formats that support point normals (’ply’, ’obj’), all normals are inverted before writing them to disc.
Note that for the types ’om3’ and ’dxf’ the parameter has no effect.
Valid for formats: ’off’, ’ply’, ’obj’, ’stl’. List of values: ’true’, ’false’.
Default: ’false’.
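As a minimal sketch, the model could be written as binary PLY while inverting normals and face orientation via the
generic parameter described above; the file name 'model.ply' is a placeholder.
* Write the 3D object model as binary PLY with inverted normals and face orientation.
write_object_model_3d (ObjectModel3D, 'ply_binary', 'model.ply', 'invert_normals', 'true')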
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model.
. FileType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the file that is written.
Default: ’om3’
List of values: FileType ∈ {’off’, ’ply’, ’ply_binary’, ’dxf’, ’om3’, ’obj’, ’stl’, ’stl_binary’, ’stl_ascii’}
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
Name of the file that is written.
File extension: .off, .ply, .dxf, .om3, .obj, .stl
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’invert_normals’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string / real / integer
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
Result
The operator write_object_model_3d returns the value 2 (H_MSG_TRUE) if the given parameters are cor-
rect and the file can be written. If necessary, an exception will be raised.
Execution Information
4.2 Features
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model.
. Area (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
Calculated area.
Number of elements: Area == ObjectModel3D
Example
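A minimal sketch, assuming a triangulated 3D object model read from the placeholder file 'object.om3':
read_object_model_3d ('object.om3', 'm', [], [], ObjectModel3D, Status)
* The area is computed from the faces of the model.
area_object_model_3d (ObjectModel3D, Area)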
Result
area_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception
is raised.
Execution Information
Compute the distances of the points of one 3D object model to another 3D object model.
The operator distance_object_model_3d computes the distances of the points in the 3D object
model ObjectModel3DFrom to the points, triangles, polygons, or primitive in the 3D object model
ObjectModel3DTo. The distances are stored as an extended attribute named ’&distance’ in the 3D object model
ObjectModel3DFrom. This attribute can subsequently be queried with get_object_model_3d_params
or be processed with select_points_object_model_3d or other operators that use extended attributes.
The target data (points, triangles, polygons, or primitive) is selected based on the attributes contained in
ObjectModel3DTo. It is selected based on the presence of the data in the following precedence: Primitive,
triangles, polygons, and points. As an alternative to this automatic target data selection, the target data type can also
be set with the generic parameter ’distance_to’ (see below). Generic, non-triangular polygons are internally
triangulated by the operator before the distance to the resulting triangles is calculated. Thus, calling the operator
with already triangulated objects is faster than calling it with objects that have non-triangular polygon faces.
MaxDistance can be used to limit the range of the distance values to be computed. If MaxDistance is set
to 0, all distances are computed. If MaxDistance is set to another value, points whose distance would exceed
MaxDistance are not processed and set to MaxDistance. Thus, setting MaxDistance to a value different
than 0 can significantly speed up the execution of this operator.
If Pose is a non-empty tuple, it must contain a pose which is applied to the points in ObjectModel3DFrom
before computing the distances. The pose can be inverted using the generic parameter ’invert_pose’ (see below).
Depending on the target data type (points, triangles, or primitive), several methods for computing the dis-
tances are available. Some of these methods compute a data structure on the elements of ObjectModel3DTo
to speed up the distance computation. Those data structures can be precomputed using the operator
prepare_object_model_3d. This allows multiple calls to distance_object_model_3d to re-use the
data structure, thus saving the time to re-compute it for each call. For objects with non-triangular polygon faces,
the operator prepare_object_model_3d can additionally perform the triangulation and save it to the object
to further speed-up the distance_object_model_3d operator. This triangulation is only performed when
the generic parameter ’distance_to’ is set to ’triangles’. Note that this triangulation, contrary to that of the operator
triangulate_object_model_3d, does not clear out the polygons attribute.
When computing the distance to points or to triangles, the operator can optionally return the index of the closest
point or triangle for each point in ObjectModel3DFrom by setting the generic parameter ’store_closest_index’
to ’true’ (see below). The index is stored as extended attribute named ’&closest_index’ in the 3D object model
ObjectModel3DFrom. Note that the closest index can not be computed when using the ’voxel’ method. If a
point’s distance to its closest element exceeds the maximum distance set in MaxDistance, the closest index is
set to -1.
Optionally, signed distances to points, triangles, or to a primitive can be calculated. For this, the generic parameter
’signed_distances’ has to be set to ’true’. Note that signed distances cannot be computed when using the ’voxel’
method in combination with point to point distances.
In the following, the different target types and methods are explained, and their advantages and disadvantages are
described. Note that the operator automatically selects a default method depending on the target data type. This
method can be overridden using the generic parameter ’method’.
Distance to points: The following methods are available to compute the distances from points to points:
Linear search: For each point in ObjectModel3DFrom, the distances to all points in
ObjectModel3DTo are computed, and the smallest distance is used. This method requires no
precomputed data structure, and is the fastest for a small number of points in ObjectModel3DTo.
KD-Tree: The points in ObjectModel3DTo are organized in a KD-Tree, which speeds up the search
for the closest point. The construction of the tree is very efficient. The search time is approximately
logarithmic to the number of points in ObjectModel3DTo. However, the search time is not constant,
and can vary significantly depending on the position of the query points in ObjectModel3DFrom.
Voxel: The points in ObjectModel3DTo are organized in a voxel structure. This voxel structure al-
lows searching in almost constant time, i.e., independent from the position of the query points in
ObjectModel3DFrom and the number of points in ObjectModel3DTo.
Note that the preparation of this data structure takes several seconds or minutes. However, it is possible
to perform a precomputation using prepare_object_model_3d on ObjectModel3DTo with
Purpose set to ’distance_computation’.
Distance to triangles: For computing the distances to triangles, the following methods are supported:
Linear search: For each point in ObjectModel3DFrom, the distances to all triangles in
ObjectModel3DTo are computed, and the smallest distance is used. This method requires no
precomputed data structure, and is the fastest for a small number of triangles in ObjectModel3DTo.
KD-Tree: The triangles in ObjectModel3DTo are organized in a KD-Tree, which speeds up the search
for the closest triangle. The construction of the tree is efficient. The search time is approximately log-
arithmic to the number of triangles in ObjectModel3DTo. However, the search time is not constant,
and can vary significantly depending on the position of the query points in ObjectModel3DFrom.
Voxel: The triangles in ObjectModel3DTo are organized in a voxel structure. This voxel structure
allows searching in almost constant time, i.e., independent from the position of the query points in
ObjectModel3DFrom and the number of triangles in ObjectModel3DTo.
Note that the preparation of this data structure takes several seconds or minutes. However, it is possible
to perform a precomputation using prepare_object_model_3d on ObjectModel3DTo with
Purpose set to ’distance_computation’. For creating the voxel data structure, the triangles are sampled.
The corresponding sampling distance can be set with the generic parameters ’sampling_dist_rel’ and
’sampling_dist_abs’.
By default, a relative sampling distance of 0.03 is used. See below for a more detailed description of
the sampling distance. Note that this data structure is only approximate. It is possible that some of the
distances are off by around 10% of the sampling distance. In these cases, the returned distances will
always be larger than the actual distances.
Distance to primitive: Since ObjectModel3DTo can contain only one primitive, the distances from the query
points to this primitive are computed linearly. The creation or usage of a data structure is not possible.
Note that computing the distance to primitive planes fitted with segment_object_model_3d or
fit_primitives_object_model_3d can be slow, since those planes contain a complex con-
vex hull of the points that were used to fit the plane. If only the distance to the plane is re-
quired, and the boundary should be ignored, it is recommended to obtain the plane pose using
get_object_model_3d_params with parameter ’primitive_parameter_pose’ and create a new plane
using gen_plane_object_model_3d.
The following table lists the different target data types, methods, and their properties. The search time is the approx-
imate time per point in ObjectModel3DFrom. N is the number of target elements in ObjectModel3DTo.
In addition to the parameters described above, the following parameters can be set to influence the distance com-
putation. If desired, these parameters and their corresponding values can be specified by using GenParamName
and GenParamValue, respectively. All of the following parameters are optional.
’distance_to’ This parameter can be used to explicitly set the target data to which the distances are computed.
’auto’ (Default) Automatically set the target data. The following list of attributes is queried, and the first
appearing attribute from the list is used as target data: Primitive, Triangle, Point.
’primitive’ Compute the distance to the primitive contained in ObjectModel3DTo.
’triangles’ Compute the distance to the triangles contained in ObjectModel3DTo.
’points’ Compute the distance to the points contained in ObjectModel3DTo.
’method’ This parameter can be used to explicitly set the method to be used for the distance computation. Note
that not all methods are available for all target data types. For the list of possible pairs of target data type and
method, see above.
’auto’ (Default) Use the default method for the used target data type.
’linear’ Use a linear search for computing the distances.
’kd-tree’ Use a KD-Tree for computing the distances.
’voxel’ Use a voxel structure for computing the distances.
’invert_pose’ This parameter can be used to invert the pose given in Pose.
’false’ (Default) The pose is not inverted.
’true’ The pose is inverted.
’output_attribute’ This parameter can be used to set the name of the attribute in which the distances are stored.
By default, the distances are stored in an extended attribute named ’&distance’ in ObjectModel3DFrom.
However, if the same 3D object model is used for different calls of this operator, the result of the previous call
would be overwritten. This can be avoided by changing the name of the extended attribute. Valid extended
attribute names start with a ’&’.
’sampling_dist_rel’, ’sampling_dist_abs’ These parameters are used when computing the distances to triangles
using the voxel method. For this, the triangles need to be sampled. The sampling distance can be set either in
absolute terms, using ’sampling_dist_abs’, or relative to the diameter of the axis aligned bounding box, using
’sampling_dist_rel’. By default, ’sampling_dist_rel’ is set to 0.03. Only one of the two parameters can be set.
The diameter of the axis aligned bounding box can be queried using get_object_model_3d_params.
Note that the creation of the voxel data structure is very time consuming, and is usually performed offline
using prepare_object_model_3d (see above).
’store_closest_index’ This parameter can be used to return the index of the closest point or triangle in the extended
attribute ’&closest_index’.
’false’ (Default) The index is not returned.
’true’ The index is returned.
’signed_distances’ This parameter can be used to calculate signed distances of the points in the 3D object model
ObjectModel3DFrom to the points, triangles or primitive in the 3D object model ObjectModel3DTo.
’false’ (Default) Unsigned distances are returned.
’true’ Signed distances are returned.
Depending on the available target data (points, triangles, or primitive), the following particularities have to be
considered:
Distance to points: The computation of signed distances is only supported for the methods ’kd-tree’ and
’linear’. However, signed distances can only be calculated if point normals are available for the points
in the 3D object model or attached via the operator set_object_model_3d_attrib_mod.
Distance to triangles: Signed distances can be calculated for all methods listed above. The operator returns
a negative distance, if the dot product with the normal vector of the triangle is less than zero. Otherwise,
the distance is positive.
Distance to primitive: When calculating signed distances to a cylindrical, spherical or box-shaped primi-
tive, the points of the 3D object model ObjectModel3DFrom inside the primitive obtain a negative
distance, whereas all others have a positive distance. When calculating signed distances to planes, all
points beneath the plane obtain a negative distance, whereas all others have a positive one.
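A minimal sketch of a typical call, assuming two existing 3D object model handles; the generic parameters used here
are among those documented above.
* Compute distances from the points of ObjectModel3DFrom to the data in ObjectModel3DTo,
* using a KD-tree and storing the index of the closest element per point.
distance_object_model_3d (ObjectModel3DFrom, ObjectModel3DTo, [], 0, \
                          ['method','store_closest_index'], ['kd-tree','true'])
* Query the resulting extended attributes.
get_object_model_3d_params (ObjectModel3DFrom, '&distance', Distances)
get_object_model_3d_params (ObjectModel3DFrom, '&closest_index', ClosestIndices)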
Parameters
. ObjectModel3DFrom (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the source 3D object model.
. ObjectModel3DTo (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the target 3D object model.
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Pose of the source 3D object model in the target 3D object model.
Default: []
. MaxDistance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Maximum distance of interest.
Default: 0
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Names of the generic input parameters.
Default: []
List of values: GenParamName ∈ {’distance_to’, ’method’, ’invert_pose’, ’output_attribute’,
’sampling_dist_rel’, ’sampling_dist_abs’, ’signed_distances’, ’store_closest_index’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Values of the generic input parameters.
Default: []
List of values: GenParamValue ∈ {’auto’, ’triangles’, ’points’, ’polygons’, ’primitive’, ’kd-tree’, ’voxel’,
’linear’, ’true’, ’false’}
Result
distance_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an ex-
ception is raised.
Execution Information
get_object_model_3d_params ( : : ObjectModel3D,
GenParamName : GenParamValue )
The lengths of the standard attributes can be queried with the meta data parameters ’num_points’,
’num_triangles’, ’num_polygons’, or ’num_lines’. Thus, to get the length of the standard attribute
’point_coord_x’, set GenParamName to ’num_points’.
Standard attributes
The following standard attributes and meta data can be accessed:
’point_coord_x’: The x-coordinates of the set of the 3D points (length can be queried by ’num_points’).
This attribute is obtained typically from the operator xyz_to_object_model_3d or
read_object_model_3d.
’point_coord_y’: The y-coordinates of the set of the 3D points (length can be queried by ’num_points’).
This attribute is obtained typically from the operator xyz_to_object_model_3d or
read_object_model_3d.
’point_coord_z’: The z-coordinates of the set of the 3D points (length can be queried by ’num_points’).
This attribute is obtained typically from the operator xyz_to_object_model_3d or
read_object_model_3d.
’point_normal_x’: The x-components of 3D point normals of the set of the 3D points (length can be queried by
’num_points’). This attribute is obtained typically from the operator smooth_object_model_3d.
’point_normal_y’: The y-components of 3D point normals of the set of the 3D points (length can be queried by
’num_points’). This attribute is obtained typically from the operator smooth_object_model_3d.
’point_normal_z’: The z-components of 3D point normals of the set of the 3D points (length can be queried by
’num_points’). This attribute is obtained typically from the operator smooth_object_model_3d.
’mapping_row’: The row-components of the 2D mapping of the set of 3D points (length can be queried by
’num_points’, height of the original image can be queried by ’mapping_size’). This attribute is obtained
typically from the operator xyz_to_object_model_3d.
’mapping_col’: The column-components of the 2D mapping of the set of 3D points (length can be queried by
’num_points’, width of the original image can be queried by ’mapping_size’). This attribute is obtained
typically from the operator xyz_to_object_model_3d.
’mapping_size’: The size of the original image. A tuple with the two entries width and height is returned.
’triangles’: The indices of the 3D points that represent triangles in the following order: The first three values
(return values 0,1,2) represent the first triangle. The next three values (return values 3,4,5) represent the
second triangle etc. All indices correspond to the coordinates of the 3D points. Access to the coordinates of
the 3D points is possible, e.g., with the generic parameter GenParamName set to the values ’point_coord_x’,
’point_coord_y’, and ’point_coord_z’, respectively. The length of this attribute corresponds to three times the
number of triangles, which can be queried using ’num_triangles’. This attribute is obtained typically from
the operator triangulate_object_model_3d or read_object_model_3d.
’polygons’: The indices of the 3D points that represent polygons in the following order: The first return value
contains the number n of the points of the first polygon. The following values (return values 1,2,..,n) represent
the indices of the points of the first polygon. The next value (return value n+1) contains the number m of the
points of the second polygon. The following m return values (return values n+2,n+3,..,n+1+m) represent the
indices of the points of the second polygon etc. All indices correspond to the coordinates of the 3D points.
Access to the coordinates of the 3D points is possible, e.g., with the generic parameter GenParamName set
to the values ’point_coord_x’, ’point_coord_y’, and ’point_coord_z’, respectively. The number of polygons
per 3D object model can be queried using ’num_polygons’. This attribute is obtained typically from the
operator read_object_model_3d.
’lines’: The indices of the 3D points that represent polylines in the following order: The first return value con-
tains the number n of points of the first polyline. The following values (return values 1,2,..,n) represent
the indices of the points of the first polyline. The next value (return value n+1) contains the number m of
points of the second polyline. The following m values (return values n+2,n+3,..,n+1+m) represent the in-
dices of the points of the second polyline etc. All indices correspond to the coordinates of the 3D points.
Access to the coordinates of the 3D points is possible, e.g., with the generic parameter GenParamName
set to the values ’point_coord_x’, ’point_coord_y’, and ’point_coord_z’, respectively. The number of lines
per 3D object model can be queried using ’num_lines’. This attribute is obtained typically from the operator
intersect_plane_object_model_3d.
’diameter_axis_aligned_bounding_box’: The diameter of the set of 3D points, defined as the length of the diagonal
of the smallest enclosing axis-parallel cuboid (see parameter ’bounding_box1’). This attribute has length 1.
’center’: 3D coordinates of the center of the set of 3D points. These coordinates are the center of the smallest
enclosing axis-parallel cuboid (see parameter ’bounding_box1’). This attribute has length 3. If there are no
3D coordinates in the 3D object model, the following rules apply:
If the 3D object model is a primitive of type cylinder (see gen_cylinder_object_model_3d) and
there are extensions, the center point between the extensions is returned. If there are no extensions, the
translation parameters of the pose are returned.
If the 3D object model is a primitive of type plane (see gen_plane_object_model_3d) and there are
extensions, the center of gravity of the plane is computed from the extensions. If there are no extensions, the
translation parameters of the pose are returned.
If the 3D object model is a primitive of type sphere or box (see gen_sphere_object_model_3d or
gen_box_object_model_3d), the center point of the object model is returned.
’primitive_type’: The primitive type (e.g., obtained from the operator
fit_primitives_object_model_3d). The return value of a sphere is ’sphere’. The return
value of a cylinder is ’cylinder’. The return value of a plane is ’plane’. The return value of a box is ’box’.
This attribute has length 1.
’primitive_parameter’: The parameters of the primitive (e.g., obtained from the operator
fit_primitives_object_model_3d). The length of this attribute depends on ’primitive_type’
and is between 4 and 10 for each 3D object model.
If the 3D object model is a primitive of type cylinder (see gen_cylinder_object_model_3d), the
return values are the (x-, y-, z-)coordinates of the center [x_center, y_center, z_center], the
normed (x-, y-, z-)directions of the main axis of the cylinder [x_axis, y_axis, z_axis], and the ra-
dius [radius] of the cylinder. The order is [x_center, y_center, z_center, x_axis, y_axis,
z_axis, radius].
If the 3D object model is a primitive of type sphere (see gen_sphere_object_model_3d), the return
values are the (x-, y-, z-)coordinates of the center [x_center, y_center, z_center] and the radius
[radius] of the sphere. The order is [x_center, y_center, z_center, radius].
If the 3D object model is a primitive of type plane (see gen_plane_object_model_3d), the 4
parameters of the Hessian normal form are returned, i.e., the unit normal (x-, y-, z-) vector [x, y, z] and the
orthogonal distance (d) of the plane from the origin of the coordinate system. The order is [x, y, z, d]. The
sign of the distance (d) determines the side of the plane on which the origin is located.
If the 3D object model is a primitive of type box (gen_box_object_model_3d), the return values are
the 3D pose (translation, rotation, type of the rotation) and the half edge lengths (length1, length2,
length3) of the box. length1 is the length of the box along the x axis of the pose. length2 is the
length of the box along the y axis of the pose. length3 is the length of the box along the z axis of the
pose. The order is [trans_x, trans_y, trans_z, rot_x, rot_y, rot_z, rot_type, length1,
length2, length3]. For details about 3D poses and the corresponding transformation matrices see the
operator create_pose.
’primitive_parameter_pose’: The parameters of the primitive with format of a 3D pose (e.g., obtained from the
operator fit_primitives_object_model_3d). For all types of primitives the return values are the
3D pose (translation, rotation, type of the rotation). For details about 3D poses and the corresponding trans-
formation matrices see the operator create_pose. The length of this attribute depends on ’primitive_type’
and is between 7 and 10 for each 3D object model.
If the 3D object model is a primitive of type cylinder (see gen_cylinder_object_model_3d), addi-
tionally, the radius [radius] of the cylinder is returned. The order is [trans_x, trans_y, trans_z,
rot_x, rot_y, rot_z, rot_type, radius].
If the 3D object model is a primitive of type sphere (see gen_sphere_object_model_3d), additionally,
the radius [radius] of the sphere is returned. The order is [trans_x, trans_y, trans_z, rot_x,
rot_y, rot_z, rot_type, radius].
If the 3D object model is a primitive of type plane (see gen_plane_object_model_3d), the order is
[trans_x, trans_y, trans_z, rot_x, rot_y, rot_z, rot_type].
If the 3D object model is a primitive of type box (see gen_box_object_model_3d), additionally the
half edge lengths (length1, length2, length3) of the box are returned. length1 is the length of the
box along the x axis of the pose. length2 is the length of the box along the y axis of the pose. length3
is the length of the box along the z axis of the pose. The order is [trans_x, trans_y, trans_z, rot_x,
rot_y, rot_z, rot_type, length1, length2, length3].
’primitive_pose’: The parameters of the primitive with format of a 3D pose (e.g., obtained from the operator
fit_primitives_object_model_3d). For all types of primitives the return values are the 3D pose
(translation, rotation, type of the rotation). For details about 3D poses and the corresponding transformation
matrices see the operator create_pose. The length of this attribute is 7 for each 3D object model. The
order is [trans_x, trans_y, trans_z, rot_x, rot_y, rot_z, rot_type].
’primitive_parameter_extension’: The extents of the primitive of type cylinder and plane (e.g., obtained from
the operator fit_primitives_object_model_3d). The length of this attribute depends on ’primi-
tive_type’ and can be queried using ’num_primitive_parameter_extension’.
If the 3D object model is a primitive of type cylinder (see gen_cylinder_object_model_3d), the
return values are the extents (MinExtent, MaxExtent) of the cylinder. They are returned in the order [MinEx-
tent, MaxExtent]. MinExtent represents the length of the cylinder in negative direction of the rotation axis.
MaxExtent represents the length of the cylinder in positive direction of the rotation axis.
If the 3D object model is a primitive of type plane (created using
fit_primitives_object_model_3d), the return value is a tuple of coplanar points lying in
the fitted plane. The order is [x coordinate of point 1, x coordinate of point 2, x coordinate of point 3, ..., y
coordinate of point 1, y coordinate of point 2, y coordinate of point 3, ...]. The coordinate values describe
the support points of a convex hull, which is computed from the projections onto the fitted plane of those
points that contributed to the fitting. If the plane was created using gen_plane_object_model_3d, all
points that were used to create the plane (XExtent, YExtent) are returned.
’primitive_rms’: The quadratic residual error of the primitive (e.g., obtained from the operator
fit_primitives_object_model_3d). This attribute has length 1.
’reference_point’: 3D coordinates of the reference point of the prepared 3D shape model for shape-based 3D
matching. The reference point is the center of the smallest enclosing axis-parallel cuboid (see parameter
’bounding_box1’). This attribute has length 3.
’bounding_box1’: Smallest enclosing axis-parallel cuboid (min_x, min_y, min_z, max_x, max_y, max_z). This
attribute has length 6.
’num_points’: The number of points. This attribute has length 1.
’num_triangles’: The number of triangles. This attribute has length 1.
’num_polygons’: The number of polygons. This attribute has length 1.
’num_lines’: The number of polylines. This attribute has length 1.
’num_primitive_parameter_extension’: The number of extended data of primitives. This attribute has length 1.
’has_points’: The existence of 3D points. This attribute has length 1.
’has_point_normals’: The existence of 3D point normals. This attribute has length 1.
’has_triangles’: The existence of triangles. This attribute has length 1.
’has_polygons’: The existence of polygons. This attribute has length 1.
’has_lines’: The existence of lines. This attribute has length 1.
’has_xyz_mapping’: The existence of a mapping of the 3D points to image coordinates. This attribute has length
1.
’has_shape_based_matching_3d_data’: The existence of a shape model for shape-based 3D matching. This at-
tribute has length 1.
’has_distance_computation_data’: The existence of a precomputed data structure for 3D distance computation.
This attribute has length 1. The data structure can be created with prepare_object_model_3d using
the purpose ’distance_computation’. It is used by the operator distance_object_model_3d.
’has_surface_based_matching_data’: The existence of data for the surface-based matching. This attribute has
length 1.
’has_segmentation_data’: The existence of data for a 3D segmentation. This attribute has length 1.
’has_primitive_data’: The existence of a primitive. This attribute has length 1.
’has_primitive_rms’: The existence of a quadratic residual error of a primitive. This attribute has length 1.
’neighbor_distance’:
’neighbor_distance N’: For every point, the distance to the N-th nearest point. N must be a positive integer and is
by default 25. For every point, all other points are sorted according to their distance, and the distance of the
N-th point is returned.
’num_neighbors X’: For every point, the number of neighbors within a distance of at most X.
’num_neighbors_fast X’: For every point, the approximate number of neighbors within a distance of at most X.
The distances are approximated using voxels, leading to faster processing compared to ’num_neighbors’.
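A minimal sketch of querying some of the standard attributes and meta data listed above for an existing 3D object
model handle:
get_object_model_3d_params (ObjectModel3D, 'num_points', NumPoints)
get_object_model_3d_params (ObjectModel3D, 'point_coord_x', X)
get_object_model_3d_params (ObjectModel3D, 'point_coord_y', Y)
get_object_model_3d_params (ObjectModel3D, 'point_coord_z', Z)
* Smallest enclosing axis-parallel cuboid: [min_x, min_y, min_z, max_x, max_y, max_z].
get_object_model_3d_params (ObjectModel3D, 'bounding_box1', BoundingBox1)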
Extended attributes
Extended attributes are attributes that can be derived from standard attributes by special operators (e.g.,
distance_object_model_3d), or user-defined attributes. User-defined attributes can be created by the op-
erator set_object_model_3d_attrib. The following extended attributes and meta data can be accessed:
’extended_attribute_names’: The names of all extended attributes. For each extended attribute name a value is
returned.
’extended_attribute_types’: The types of all extended attributes. For each extended attribute a value is returned;
the values are ordered in the same way as the output of ’extended_attribute_names’.
’has_extended_attribute’: The existence of at least one extended attribute. For each 3D object model a value is
returned.
’num_extended_attribute’: The number of extended attributes. For each 3D object model a value is returned.
’&attribute_name’: The values stored under a user-defined extended attribute. Note that this name must start with
’&’, e.g., ’&my_attrib’. The data of the requested extended attributes are returned in GenParamValue.
The order in which the data is returned is the same as the order of the attribute names specified in
GenParamName.
’original_point_indices’: Indices of the 3D points in a different 3D object model (length can
be queried by ’num_points’). This attribute is obtained typically from the operator
triangulate_object_model_3d.
’score’: The score of the set of the 3D points (length can be queried by ’num_points’). This attribute is obtained
typically from the operator reconstruct_surface_stereo.
’red’: The red channel of the set of the 3D points (length can be queried by ’num_points’). This attribute is
obtained typically from the operator reconstruct_surface_stereo.
’green’: The green channel of the set of the 3D points (length can be queried by ’num_points’). This attribute is
obtained typically from the operator reconstruct_surface_stereo.
’blue’: The blue channel of the set of the 3D points (length can be queried by ’num_points’). This attribute is
obtained typically from the operator reconstruct_surface_stereo.
’edge_dir_x’: The x-component of a vector that is perpendicular to the edge direction and the viewing direction.
This attribute is obtained typically from the operator edges_object_model_3d.
’edge_dir_y’: The y-component of a vector that is perpendicular to the edge direction and the viewing direction.
This attribute is obtained typically from the operator edges_object_model_3d.
’edge_dir_z’: The z-component of a vector that is perpendicular to the edge direction and the viewing direction.
This attribute is obtained typically from the operator edges_object_model_3d.
’edge_amplitude’: Contains the amplitude of edge points. This attribute is obtained typically from the operator
edges_object_model_3d.
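A minimal sketch of inspecting the extended attributes of a 3D object model, assuming that an operator such as
distance_object_model_3d has previously stored the attribute '&distance':
get_object_model_3d_params (ObjectModel3D, 'has_extended_attribute', HasExtAttrib)
get_object_model_3d_params (ObjectModel3D, 'extended_attribute_names', AttribNames)
* Query the values of one specific extended attribute.
get_object_model_3d_params (ObjectModel3D, '&distance', Distances)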
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic attributes that are queried for the 3D object model.
Default: ’num_points’
List of values: GenParamName ∈ {’point_coord_x’, ’point_coord_y’, ’point_coord_z’, ’point_normal_x’,
’point_normal_y’, ’point_normal_z’, ’mapping_row’, ’mapping_col’, ’mapping_size’, ’triangles’, ’polygons’,
’lines’, ’diameter_axis_aligned_bounding_box’, ’center’, ’primitive_type’, ’primitive_rms’,
’primitive_parameter’, ’primitive_parameter_pose’, ’primitive_pose’, ’primitive_parameter_extension’,
’reference_point’, ’bounding_box1’, ’num_points’, ’num_triangles’, ’num_polygons’, ’num_lines’,
’num_primitive_parameter_extension’, ’has_points’, ’has_point_normals’, ’has_triangles’, ’has_polygons’,
’has_lines’, ’has_xyz_mapping’, ’has_shape_based_matching_3d_data’, ’has_surface_based_matching_data’,
’has_segmentation_data’, ’has_primitive_data’, ’has_primitive_rms’, ’extended_attribute_names’,
’extended_attribute_types’, ’has_extended_attribute’, ’num_extended_attribute’,
’has_distance_computation_data’, ’red’, ’green’, ’blue’, ’score’, ’neighbor_distance’, ’num_neighbors’,
’num_neighbors_fast’, ’original_point_indices’, ’edge_amplitude’, ’edge_dir_x’, ’edge_dir_y’, ’edge_dir_z’}
Result
max_diameter_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an
exception is raised.
Execution Information
Possible Predecessors
read_object_model_3d, connection_object_model_3d
Possible Successors
select_object_model_3d
See also
volume_object_model_3d_relative_to_plane, area_object_model_3d,
moments_object_model_3d
Module
3D Metrology
moments_object_model_3d ( : : ObjectModel3D,
MomentsToCalculate : Moments )
Calculates the mean or the central moment of second order for a 3D object model.
moments_object_model_3d calculates the mean or the central moment of second order for a 3D
object model. To calculate the mean of the points of the 3D object model, select ’mean_points’ in
MomentsToCalculate. If instead the central moment of second order should be calculated, select
’central_moment_2_points’. The results are the variances along the x, y, and z axes and the covariances for the
x-y, x-z, and y-z axis pairs. To compute the
three principal axes of the 3D object model, select ’principal_axes’ in MomentsToCalculate. The result is a
pose with the mean of the points as center. The coordinate system that corresponds to the pose has the x-axis along
the first principal axis, the y-axis along the second principal axis and the z-axis along the third principal axis.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model.
. MomentsToCalculate (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; string
Moment to calculate.
Default: ’mean_points’
List of values: MomentsToCalculate ∈ {’mean_points’, ’central_moment_2_points’, ’principal_axes’}
. Moments (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
Calculated moment.
Number of elements: Moments == ObjectModel3D
Example
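A minimal sketch, assuming an existing 3D object model handle:
* Mean of the 3D points.
moments_object_model_3d (ObjectModel3D, 'mean_points', MeanPoint)
* Pose whose axes are aligned with the three principal axes of the point set.
moments_object_model_3d (ObjectModel3D, 'principal_axes', PrincipalAxesPose)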
Result
moments_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an excep-
tion is raised.
Execution Information
Possible Predecessors
read_object_model_3d, connection_object_model_3d
Possible Successors
project_object_model_3d, object_model_3d_to_xyz, select_object_model_3d
See also
volume_object_model_3d_relative_to_plane
Module
3D Metrology
Select 3D object models from an array of 3D object models according to global features.
select_object_model_3d selects 3D object models from an array of 3D object models for which the values
of specified global features lie within a specified range. The features that may be specified in Feature are
listed below under the parameter description.
For all features listed in Feature a minimal and maximal threshold must be specified in MinValue and
MaxValue. This range is then used to select all given 3D object models that fulfill the given conditions. These
are copied to ObjectModel3DSelected. For logical parameters (e.g., ’has_points’, ’has_point_normals’,
...), MinValue and MaxValue can both be set to ’true’ to select all 3D object models that have the respective
attribute or to ’false’ to select all that do not have it. MinValue and MaxValue can be set to ’min’ and ’max’,
respectively, to ignore the corresponding threshold.
The parameter Operation defines the logical operation that is used to combine different features in Feature.
It can be either a logical ’or’ or ’and’.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handles of the available 3D object models to select.
. Feature (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
List of features a test is performed on.
Default: ’has_triangles’
List of values: Feature ∈ {’mean_points_x’, ’mean_points_y’, ’mean_points_z’, ’volume’,
’volume_axis_aligned_bounding_box’, ’central_moment_2_x’, ’central_moment_2_y’,
’central_moment_2_z’, ’central_moment_2_xy’, ’central_moment_2_xz’, ’central_moment_2_yz’,
’diameter_axis_aligned_bounding_box’, ’diameter_bounding_box’, ’diameter_object’, ’area’, ’has_points’,
’has_triangles’, ’has_faces’, ’has_lines’, ’has_xyz_mapping’, ’has_point_normals’,
’has_shape_based_matching_3d_data’, ’has_surface_based_matching_data’, ’has_segmentation_data’,
’has_primitive_data’, ’num_points’, ’num_triangles’, ’num_faces’, ’num_lines’}
. Operation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Logical operation to combine the features given in Feature.
Default: ’and’
List of values: Operation ∈ {’and’, ’or’}
. MinValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer / string
Minimum value for the given feature.
Default: 1
Suggested values: MinValue ∈ {0, 1, 100, 0.1, ’true’, ’false’, ’min’}
. MaxValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer / string
Maximum value for the given feature.
Default: 1
Suggested values: MaxValue ∈ {0, 1, 10, 100, 0.1, ’true’, ’false’, ’max’}
. ObjectModel3DSelected (output_control) . . . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
A subset of ObjectModel3D fulfilling the given conditions.
Example
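A minimal sketch, assuming ObjectModels3D is a tuple of 3D object model handles, e.g., obtained from
connection_object_model_3d:
* Keep all triangulated models with at least 500 points.
select_object_model_3d (ObjectModels3D, ['num_points','has_triangles'], 'and', \
                        [500,'true'], ['max','true'], ObjectModel3DSelected)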
Result
select_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception
is raised.
Execution Information
connection_object_model_3d, get_object_model_3d_params,
volume_object_model_3d_relative_to_plane, area_object_model_3d,
max_diameter_object_model_3d, moments_object_model_3d
Possible Successors
project_object_model_3d, object_model_3d_to_xyz
See also
volume_object_model_3d_relative_to_plane, area_object_model_3d,
max_diameter_object_model_3d, moments_object_model_3d,
get_object_model_3d_params
Module
3D Metrology
smallest_bounding_box_object_model_3d ( : : ObjectModel3D,
Type : Pose, Length1, Length2, Length3 )
Calculate the smallest bounding box around the points of a 3D object model.
smallest_bounding_box_object_model_3d calculates the smallest bounding box around the points of
a 3D object model. The resulting bounding box is described using its coordinate system (Pose), which is oriented
such that the longest side of the box is aligned with the x-axis, the second longest side is aligned with the y-axis
and the smallest side is aligned with the z-axis. The lengths of the sides are returned in Length1, Length2, and
Length3, in descending order. The box can be either axis-aligned or oriented, which is chosen with the parameter Type.
The algorithm for ’oriented’ is computationally significantly more costly than the algorithm for ’axis_aligned’,
and returns only an approximation of the oriented bounding box. Note that the algorithm for the oriented bounding
box is randomized and can return a different box for each call.
In order to retrieve the corners of the ’axis_aligned’ box, the operator get_object_model_3d_params can
be used with the parameter ’bounding_box1’.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
The method that is used to estimate the smallest box.
Default: ’oriented’
List of values: Type ∈ {’oriented’, ’axis_aligned’}
. Pose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
The pose that describes the position and orientation of the box that is generated. The pose has its origin in the
center of the box and is oriented such that the x-axis is aligned with the longest side of the box.
. Length1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
The length of the longest side of the box.
Number of elements: Length1 == ObjectModel3D
. Length2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
The length of the second longest side of the box.
Number of elements: Length2 == ObjectModel3D
. Length3 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
The length of the third longest side of the box.
Number of elements: Length3 == ObjectModel3D
Example
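A minimal sketch, assuming an existing 3D object model handle:
* Approximate the smallest oriented bounding box of the point set.
smallest_bounding_box_object_model_3d (ObjectModel3D, 'oriented', Pose, \
                                       Length1, Length2, Length3)
* For the axis-aligned case, the corner coordinates can also be queried directly.
smallest_bounding_box_object_model_3d (ObjectModel3D, 'axis_aligned', PoseAA, \
                                       LengthAA1, LengthAA2, LengthAA3)
get_object_model_3d_params (ObjectModel3D, 'bounding_box1', BoundingBox1)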
Result
smallest_bounding_box_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If
necessary, an exception is raised.
Execution Information
Result
smallest_sphere_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If neces-
sary, an exception is raised.
Execution Information
volume_object_model_3d_relative_to_plane ( : : ObjectModel3D,
Plane, Mode, UseFaceOrientation : Volume )
The parameter Mode determines how the volumes above and below the plane given in Plane are combined:
’signed’ (default) The volumes above and below the plane are added.
’unsigned’ The volume below the plane is subtracted from the volume above the plane.
’positive’ Only faces above the plane are taken into account.
’negative’ Only faces below the plane are taken into account.
The parameter UseFaceOrientation decides whether the orientation of a face affects the sign of the underlying
volume:
’true’ (default) Use the orientation of the faces relative to the plane. A face points away from the plane if the
corner points are ordered clockwise when viewed from the plane. The volume under a face is considered
positive if the orientation of the face is away from the plane. In contrast, it is considered negative if the
orientation of the face is towards the plane.
’false’ The volume under a face is considered positive if the face is located above the plane. In contrast, it is
considered negative if the face is located below the plane.
For example, with the default combination (Mode: ’signed’, UseFaceOrientation: ’true’), you can approxi-
mate the real volume of a closed object. In this case, the Plane is still required, but does not change the resulting
volume.
Attention
The calculation of the volume might be numerically unstable in case of a large distance between the plane and the
object (approx. distance > 10000 times the object diameter).
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model.
. Plane (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
Pose of the plane.
Default: [0,0,0,0,0,0,0]
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Method to combine volumes laying above and below the reference plane.
Default: ’signed’
List of values: Mode ∈ {’positive’, ’negative’, ’unsigned’, ’signed’}
. UseFaceOrientation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Decides whether the orientation of a face should affect the resulting sign of the underlying volume.
Default: ’true’
List of values: UseFaceOrientation ∈ {’true’, ’false’}
. Volume (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
Absolute value of the calculated volume.
Number of elements: Volume == ObjectModel3D
Example
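A minimal sketch, assuming a closed, triangulated 3D object model; with the default combination described above,
the result approximates the enclosed volume and the plane pose acts only as a reference:
* Volume relative to the z=0 plane with signed mode and face orientation.
volume_object_model_3d_relative_to_plane (ObjectModel3D, [0,0,0,0,0,0,0], \
                                          'signed', 'true', Volume)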
Result
volume_object_model_3d_relative_to_plane returns 2 (H_MSG_TRUE) if all parameters are cor-
rect. If necessary, an exception is raised.
Execution Information
4.3 Segmentation
’primitive_type’: The parameter specifies which type of 3D primitive should be fitted into the set of 3D points.
You can specify a specific primitive type by setting ’primitive_type’ to ’cylinder’, ’sphere’, or ’plane’. Then,
only the selected type of 3D primitive is fitted into the set of 3D points. You can also specify a set of specific
3D primitives that should be fitted by setting ’primitive_type’ to a tuple consisting of different primitive types.
If all types of 3D primitives should be fitted, you can set ’primitive_type’ to ’all’. Note that if more than one
primitive type is selected, only the best fitting 3D primitive, i.e., the 3D primitive with the smallest quadratic
residual error, is returned.
List of values: ’cylinder’, ’sphere’, ’plane’, ’all’
Default: ’cylinder’
’fitting_algorithm’: The parameter specifies the used algorithm for the fitting of the 3D primitive. When fitting
a plane, the results are identical for the different algorithms. If ’fitting_algorithm’ is set to ’least_squares’,
the approach minimizes the quadratic distance between the 3D points and the resulting primitive. If ’fit-
ting_algorithm’ is set to ’least_squares_huber’, the approach is similar to ’least_squares’, but the points are
weighted to decrease the impact of outliers based on the approach of Huber (see below). If ’fitting_algorithm’
is set to ’least_squares_tukey’, the approach is also similar to ’least_squares’, but the points are weighted
and outliers are ignored based on the approach of Tukey (see below).
For ’least_squares_huber’ and ’least_squares_tukey’, a robust error statistic is used to estimate the standard
deviation of the distances of the object points from the fitted primitive, excluding outliers. The Tukey
algorithm removes outliers, whereas the Huber algorithm only damps them, or more precisely, weights them
linearly. In practice, the approach of Tukey is recommended.
List of values: ’least_squares’, ’least_squares_huber’, ’least_squares_tukey’
Default: ’least_squares’
’min_radius’: The parameter specifies the minimum radius of a cylinder or a sphere. If a cylinder or a sphere with
a smaller radius is fitted, the resulting 3D object model is empty. The parameter is ignored when fitting a
plane. The unit is meter.
Suggested values: 0.01, 0.02, 0.1
Default: 0.01
’max_radius’: The parameter specifies the maximum radius of a cylinder or a sphere. If a cylinder or a sphere
with a larger radius is fitted, the resulting 3D object model is empty. The parameter is ignored when fitting a
plane. The unit is meter.
Suggested values: 0.02, 0.04, 0.2
Default: 0.2
’output_point_coord’: The parameter determines if the 3D points used for the fitting are copied to the output 3D
object model. If ’output_point_coord’ is set to ’true’, the 3D points are copied. If ’output_point_coord’ is set to
’false’, no 3D points are copied.
List of values: ’true’, ’false’
Default: ’true’
’output_xyz_mapping’: The parameter determines if a mapping from the 3D points to image coordinates is
copied to the output 3D object model. This information is needed, e.g., when using the operator
object_model_3d_to_xyz after the fitting (e.g., for a visualization). If ’output_xyz_mapping’ is set
to ’true’, the image coordinate mapping is copied. Note that the parameter is only valid if the image
coordinate mapping is available in the input 3D object model. Make sure that, if you derive the input 3D object
model by copying it with the operator copy_object_model_3d from a 3D object model that contains
such a mapping, the mapping is copied, too. Furthermore, the parameter is only valid if the 3D points are
copied to the output 3D object model, which is controlled by the parameter ’output_point_coord’.
List of values: ’true’, ’false’
Default: ’false’
The minimum number of 3D points that is necessary to fit a plane is three. The minimum number of 3D points
that is necessary to fit a sphere is four. The minimum number of 3D points that is necessary to fit a cylinder is five.
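A minimal sketch of fitting a cylinder with fit_primitives_object_model_3d using the generic parameters described
above and reading back the resulting primitive parameters; an existing, suitably segmented 3D object model handle
is assumed.
fit_primitives_object_model_3d (ObjectModel3D, \
                                ['primitive_type','fitting_algorithm'], \
                                ['cylinder','least_squares_tukey'], \
                                ObjectModel3DFitted)
* For a cylinder: [x_center, y_center, z_center, x_axis, y_axis, z_axis, radius].
get_object_model_3d_params (ObjectModel3DFitted, 'primitive_parameter', PrimitiveParams)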
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the input 3D object model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Number of elements: GenParamName == GenParamValue
List of values: GenParamName ∈ {’primitive_type’, ’fitting_algorithm’, ’min_radius’, ’max_radius’,
’output_point_coord’, ’output_xyz_mapping’}
Remove points from a 3D object model by projecting it to a virtual view and removing all points outside of a given
region.
reduce_object_model_3d_by_view projects the points of ObjectModel3D into the image plane given
by Pose and CamParam and reduces the 3D object model to the points lying inside the region given in Region.
In particular, the points are first transformed with the pose and then projected using the camera parameters. Only
those points that are located inside the specified region are copied to the new 3D object model.
Faces of a mesh are only contained in the output 3D object model if all corner points are within the region.
As an alternative to camera parameters and a pose, an XYZ mapping contained in ObjectModel3D can be used for
the reduction. For this, CamParam must be set to ’xyz_mapping’ or an empty tuple, and an empty tuple must be
passed to Pose. In this case, the original image coordinates of the 3D points are used to check whether a point lies
inside Region.
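A minimal sketch of this variant, assuming hypothetical XYZ images XImage, YImage, and ZImage from a 3D
sensor:

* Create a 3D object model that contains an XYZ mapping.
xyz_to_object_model_3d (XImage, YImage, ZImage, ObjectModel3D)
* Reduce it to the points whose original image coordinates lie inside a ROI.
gen_rectangle1 (ROI, 100, 100, 380, 540)
reduce_object_model_3d_by_view (ROI, ObjectModel3D, 'xyz_mapping', [], \
                                ObjectModel3DReduced)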
Attention
Cameras with hypercentric lenses are not supported.
Parameters
. Region (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region(-array) ; object
Region in the image plane.
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
Suggested values: CamParam ∈ {’xyz_mapping’, []}
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
3D pose of the world coordinate system in camera coordinates.
Number of elements: Pose == 7
Example
gen_object_model_3d_from_points (200*(rand(100)-0.5), \
200*(rand(100)-0.5), \
200*(rand(100)-0.5), ObjectModel3D)
gen_circle (Circle, 240, 320, 60)
CamParam := ['area_scan_telecentric_division',1,0,1,1,320,240,640,480]
Pose := [0,0,1,0,0,0,0]
reduce_object_model_3d_by_view (Circle, ObjectModel3D, CamParam, \
Pose, ObjectModel3DReduced)
dev_get_window (WindowHandle)
visualize_object_model_3d (WindowHandle, [ObjectModel3D, \
ObjectModel3DReduced], CamParam, Pose, \
['color_0', 'point_size_1'], ['blue',6], \
[], [], [], PoseOut)
Result
reduce_object_model_3d_by_view returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary,
an exception is raised.
Execution Information
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d
Possible Successors
project_object_model_3d, object_model_3d_to_xyz
See also
select_points_object_model_3d
Module
3D Metrology
To control the segmentation and the fitting, you can adjust some generic parameters within GenParamName and
GenParamValue. Note, however, that for many applications the default values are sufficient and no adjustment is
necessary. The following values for GenParamName and GenParamValue are possible:
’max_orientation_diff’: The parameter specifies the maximum angle between the point normals of two neighboring
3D points (in radians) that is allowed so that the two points belong to the same sub-set of 3D points. For a
cylinder or sphere, the parameter value depends on the dimension of the object and on the distance of the
neighboring 3D points. That is, if the cylinder or sphere has a very small radius or if the 3D points are not very
dense, the value must be chosen higher. For a plane, the value is independent of the dimension of the object
and can be set to a small value.
Suggested values: 0.10, 0.15, 0.20
Default: 0.15
’max_curvature_diff’: The parameter specifies the maximum difference between the curvatures of the surface at
the positions of two neighboring 3D points that is allowed so that the two points belong to the same sub-set
of 3D points. The value depends on the noise of the 3D points. That is, if the noise level of the 3D points is
very high, the value must be set higher. Generally, the number of resulting 3D object models decreases for a
higher value, because more 3D points are merged into a sub-set of 3D points.
Suggested values: 0.03, 0.04, 0.05
Default: 0.05
’min_area’: The parameter specifies the minimum number of 3D points needed for a sub-set of connected 3D
points to be returned by the segmentation. Thus, for a sub-set with fewer points the points are deleted and no
output handle is created.
Suggested values: 1, 10, 100
Default: 100
’fitting’: The parameter specifies whether 3D primitives are fitted into the sub-sets of 3D points after the
segmentation. If ’fitting’ is set to ’true’, which is the default, the fitting is calculated and the returned
3D object models contain the parameters of the corresponding 3D primitives. The output parameters of
a cylinder, a sphere, or a plane are described with the operator fit_primitives_object_model_3d.
If ’fitting’ is set to ’false’, only a segmentation is performed and the output 3D object models contain the
segmented sub-sets of 3D points. A later fitting can be performed with the operator
fit_primitives_object_model_3d.
List of values: ’false’, ’true’
Default: ’true’
’output_xyz_mapping’: The parameter determines if a mapping from the segmented 3D points to image coordi-
nates is copied to the output 3D object model. This information is needed, e.g., when using the operator
object_model_3d_to_xyz after the segmentation (e.g., for a visualization). If ’output_xyz_mapping’
is set to ’true’, the image coordinate mapping is copied. Note that the parameter is only valid if the image
coordinate mapping is available in the input 3D object model. If you derive the input 3D object model by
copying it with the operator copy_object_model_3d from a 3D object model that contains such a mapping,
make sure that the mapping is copied, too. Furthermore, the parameter is only valid if the 3D points are copied
to the output 3D object model, which is set with the parameter ’output_point_coord’. If ’output_xyz_mapping’
is set to ’false’, the image coordinate mapping is not copied.
List of values: ’true’, ’false’
Default: ’false’
’primitive_type’, ’fitting_algorithm’, ’min_radius’, ’max_radius’, ’output_point_coord’: These parameters are
used if ’fitting’ is set to ’true’, which is the default. The meaning and the use of these parameters are described
with the operator fit_primitives_object_model_3d.
’surface_check’: The parameter determines whether the surface of a triangulated input object model is checked
regarding its conformity to the expected requirements. If the input 3D object model contains triangles
that are topologically invalid, an error message is raised. If the triangulation was created
(triangulate_object_model_3d) or edited (e.g., by simplify_object_model_3d) by a HAL-
CON operator, a surface check should not be necessary. The check can be disabled in order to reduce the
runtime by setting ’surface_check’ to ’false’.
List of values: ’true’, ’false’
Default: ’true’
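A minimal sketch of a segmentation with subsequent fitting of cylinders only; the XYZ images and the parameter
values are hypothetical:

* Create an ordered point cloud with an XYZ mapping from sensor images.
xyz_to_object_model_3d (XImage, YImage, ZImage, ObjectModel3D)
* Segment it and fit cylinders into the resulting sub-sets of points.
segment_object_model_3d (ObjectModel3D, \
                         ['max_orientation_diff','max_curvature_diff', \
                          'min_area','primitive_type'], \
                         [0.15, 0.05, 500, 'cylinder'], ObjectModel3DOut)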
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the input 3D object model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Number of elements: GenParamName == GenParamValue
List of values: GenParamName ∈ {’max_orientation_diff’, ’max_curvature_diff’, ’min_area’,
’primitive_type’, ’fitting_algorithm’, ’min_radius’, ’max_radius’, ’output_point_coord’,
’output_xyz_mapping’, ’surface_check’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string / real / integer
Values of the generic parameters.
Number of elements: GenParamValue == GenParamName
Suggested values: GenParamValue ∈ {0.15, 0.05, 100, ’true’, ’false’, ’cylinder’, ’sphere’, ’plane’, ’all’,
’least_squares’, ’least_squares_huber’, ’least_squares_tukey’}
. ObjectModel3DOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handle of the output 3D object model.
Result
segment_object_model_3d returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an
exception is raised.
Execution Information
Depending on the properties of ObjectModel3D, different values are possible for Attrib (see the list of values
in the parameter description below).
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object models.
. Attrib (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Attributes the threshold is applied to.
Default: ’point_coord_z’
List of values: Attrib ∈ {’point_coord_x’, ’point_coord_y’, ’point_coord_z’, ’point_normal_x’,
’point_normal_y’, ’point_normal_z’, ’mapping_row’, ’mapping_col’, ’neighbor_distance’, ’num_neighbors’,
’num_neighbors_fast’}
. MinValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Minimum value for the attributes specified by Attrib.
Default: 0.5
. MaxValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Maximum value for the attributes specified by Attrib.
Default: 1.0
. ObjectModel3DThresholded (output_control) . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handle of the reduced 3D object models.
Example
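A minimal sketch; the file name and the threshold values are hypothetical:

read_object_model_3d ('scene.om3', 'm', [], [], ObjectModel3D, Status)
* Keep only the points whose z coordinate lies between 0.3 m and 0.8 m.
select_points_object_model_3d (ObjectModel3D, 'point_coord_z', 0.3, 0.8, \
                               ObjectModel3DThresholded)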
Result
select_points_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary,
an exception is raised. If the required points are missing in the object model, i.e., an empty object model is passed,
the error 9515 is raised.
Execution Information
4.4 Transformations
affine_trans_object_model_3d ( : : ObjectModel3D,
HomMat3D : ObjectModel3DAffineTrans )
Result
connection_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an
exception is raised.
Execution Information
convex_hull_object_model_3d (
: : ObjectModel3D : ObjectModel3DConvexHull )
Result
convex_hull_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an
exception is raised.
Execution Information
’max_gap’: This parameter specifies the maximum size (in pixels) of gaps in the XYZ images that are closed. Gaps
larger than this value will contain edges at their boundary, while gaps smaller than this value will not. This
suppresses edges around smaller patches that were not reconstructed by the sensor as well as edges at the
more distant part of a discontinuity. For sensors with very large resolutions, the value should be increased to
avoid spurious edges.
Default: 30.
’estimate_viewpose’: This parameter can be used to turn off the automatic viewpose estimation and set a manual
viewpoint.
Default: ’true’.
’viewpoint’: This parameter only has an effect when ’estimate_viewpose’ is set to ’false’. It specifies the viewpoint
from which the 3D data is seen. It is used to determine the viewing directions and edge directions. It defaults
to the origin ’0 0 0’ of the 3D data. If the projection center is at a different location, for example, if the 3D
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model whose edges should be computed.
. MinAmplitude (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Edge threshold.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’max_gap’, ’estimate_viewpose’, ’viewpoint’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer / string
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {’0 0 0’, 10, 30, 100, ’true’, ’false’}
. ObjectModel3DEdges (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
3D object model containing the edges.
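Example
A minimal sketch; the XYZ images and the edge threshold are hypothetical:

xyz_to_object_model_3d (XImage, YImage, ZImage, ObjectModel3D)
* Extract 3D edges with an amplitude threshold of 10 and a maximum gap of 30.
edges_object_model_3d (ObjectModel3D, 10, ['max_gap'], [30], \
                       ObjectModel3DEdges)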
Result
edges_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception
is raised.
Execution Information
1. Acquire point clouds and transform them into a common coordinate system, for example using
register_object_model_3d_pair and register_object_model_3d_global.
2. If not already available, compute triangles or point normals for the point clouds using
triangulate_object_model_3d or surface_normals_object_model_3d. A triangu-
lation is more suitable if you have surfaces with many outliers or holes that should be closed. Otherwise, for
clean surfaces, you can work with normals.
3. Inspect the normals of the input models using visualize_object_model_3d with GenParamName
’disp_normals’ or dev_inspect_ctrl. The point or triangle normals have to be oriented consistently
towards the inside or outside of the object. Set NormalDirection accordingly to ’inwards’ or ’outwards’.
4. Specify the volume of interest in BoundingBox. To obtain a first guess for BoundingBox, use
get_object_model_3d_params with GenParamName set to ’bounding_box1’.
5. Specify an initial set of parameters: a rough Resolution (e.g., 1/100 of the diameter of the
BoundingBox), SurfaceTolerance at least a bit larger (e.g., 5*Resolution), MinThickness
as the minimum thickness of the object (if the input point clouds represent the object only from one side, set
it very high, so that the object is cut off at the BoundingBox), and Smoothing set to 1.0 (see the sketch
after this list).
6. Apply fuse_object_model_3d and readjust the parameters to improve the results with respect to quality
and runtime, see below. Use a Resolution just fine enough to make out the details of your object while
tuning the other parameters, in order to avoid long runtimes. Also consider using the additional parameters
in GenParamName.
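The following minimal sketch illustrates steps 4 and 5 for a tuple of registered point clouds; the handle
ObjectModels3D, the chosen factors, and the use of only the first model's bounding box are assumptions for
illustration:

* Step 4: obtain a first guess for the volume of interest.
get_object_model_3d_params (ObjectModels3D[0], 'bounding_box1', BoundingBox)
get_object_model_3d_params (ObjectModels3D[0], \
                            'diameter_axis_aligned_bounding_box', Diameter)
* Step 5: derive initial parameter values from the object size.
Resolution := Diameter / 100.0
SurfaceTolerance := 5 * Resolution
* One-sided data: choose MinThickness large so the object is cut off at the box.
MinThickness := 2 * Diameter
Smoothing := 1.0
* Step 6: pass these values together with NormalDirection to
* fuse_object_model_3d (see the parameter description below).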
Parameter Description
See the HDevelop example fuse_object_model_3d_workflow for an explanation of how to fine-tune the
parameters for your application.
The input point clouds ObjectModel3D have to lie in a common coordinate system and add up to the initial
surface. Furthermore, they must contain triangles or point normals. If both attributes are present, normals are
used by default due to speed advantages. If triangles should be used, use copy_object_model_3d to obtain
only point and triangle information. Surfaces with many outliers or holes that should be closed are best used with
a triangulation, clean surfaces with normals. The point or triangle normals have to be oriented consistently towards
the inside or outside of the object.
NormalDirection is used to specify whether the point or triangle normals point ’inwards’ or ’outwards’. If
only one value is specified, it is applied to all input models. Otherwise, the number of values has to equal the
number of input models.
BoundingBox specifies the volume of interest to be taken into account for input and output. Note that points
outside the bounding box are discarded. Triangles of the input point cloud with a point outside the BoundingBox
are discarded, not clipped. The BoundingBox is specified as a tuple [x1,y1,z1,x2,y2,z2] assigning two
opposite corner points P1=[x1,y1,z1] and P2=[x2,y2,z2] of the rectangular cuboid (with edges parallel
to the coordinate axes). For a valid bounding box, P1 must be the point on the front lower left corner and P2 on
the back upper right corner of the bounding box, i.e., x1<x2, y1<y2 and z1<z2. Note that the operator will
try to produce a closed surface. If the input point clouds represent the object from only one point of view, the
bounding box usually should cut off the unknown part; therefore, MinThickness should be set, e.g., to a value
larger than or equal to the length of the diagonal of the bounding box (which can be obtained by using
get_object_model_3d_params with the parameter ’diameter_axis_aligned_bounding_box’). An object
cut off by a surface of the bounding box has no points at this specific surface and thus has a hole. Note also that you
may have to rotate the input point clouds in order to make the bounding box cut off the unknown part in the right
place, since the edges of the bounding box are always parallel to the coordinate axes. This can be achieved, e.g.,
using affine_trans_object_model_3d or rigid_trans_object_model_3d.
Resolution specifies the distance of neighboring grid points in each coordinate direction in the discretization
of the BoundingBox. Resolution is set in the same unit as used in ObjectModel3D. Too small values will
unnecessarily increase the runtime, so it is recommended to begin with a coarse resolution. Too large values will
lead to a reconstruction with high loss of details. Smoothing may need to be adapted when Resolution is
changed. Resolution should always be a bit smaller than SurfaceTolerance in order to avoid discretiza-
tion artifacts.
SurfaceTolerance specifies how much noise in the input point cloud should be combined to the surface from
its inside and outside. The sole exception is when SurfaceTolerance is larger than ’distance_in_front’; in that
case, ’distance_in_front’ determines the surface thickness towards the front of the object. SurfaceTolerance is set in the
same unit as used in ObjectModel3D. Points in the interior of the object as specified by NormalDirection
(and also GenParamName=’angle_threshold’) are considered surely inside the object if their distance to the initial
surface exceeds SurfaceTolerance but is smaller than MinThickness. SurfaceTolerance always has
to be smaller than MinThickness. SurfaceTolerance should always be a bit larger than Resolution in
order to avoid discretization artifacts.
MinThickness specifies the thickness of the object in normal direction of the initial surfaces. MinThickness
is set in the same unit as used in ObjectModel3D. Points which are specified by NormalDirection (and
also GenParamName=’angle_threshold’) to be in the interior of the object are only considered as being inside if
their distance to the initial surface does not exceed MinThickness. Note that this can lead to a hollow part of
the object. MinThickness always has to be larger than SurfaceTolerance. For point clouds representing
the object from different sides, MinThickness is best set to the thickness of the object's narrowest part. Note
that the operator will try to produce a closed surface. If the input point clouds represent the object only from one
side, this parameter should be set very large, so that the object is cut off at the bounding box. The backside of the
object is not observed, and thus its reconstruction would probably be incorrect anyway. If you observe several distinct objects
from only one side, you may want to reduce the parameter MinThickness to restrict the depth of reconstructed
objects and thus keep them from being smudged into one surface. Too small values can result in holes or double
walls in the fused point cloud. Too large values can result in a distorted point cloud or blow up the surface towards
the outside of the object (if the surface is blown up beyond the bounding box, no points will be returned).
[Figure: schematic view of the parameters SurfaceTolerance, MinThickness, and the value ’distance_in_front’
with the aid of an example surface. In the sketch, o are points taken as outside, s are points of the surface, i are
points surely inside the object, and c are points also considered for the evaluation of the surface. (1):
’distance_in_front’ smaller than SurfaceTolerance. (2): ’distance_in_front’ larger than SurfaceTolerance.]
Smoothing determines how important a small total variation of the distance function is compared to data fidelity.
Thus, Smoothing regulates the ’jumpiness’ of the resulting surface. Note that the actual value of Smoothing
that results in an appropriate and visually pleasing surface for given data has to be found by trial and error. Too
small values lead to integrating many outliers into the surface, even if the surface then exhibits many jumps. Too
large values lead to lost fidelity towards the input point clouds (how the algorithm evaluates distances to the input
point clouds depends heavily on SurfaceTolerance and MinThickness). Smoothing may need to be
adapted when Resolution is changed.
By setting GenParamName to the following values, the additional parameters can be set with GenParamValue:
’distance_in_front’ Points in the exterior of the object as specified by NormalDirection (and also
GenParamName=’angle_threshold’) are only considered as part of the object if their distance to the ini-
tial surface does not exceed ’distance_in_front’. This is the outside analog of MinThickness for the
interior, except that ’distance_in_front’ does not have to be larger than SurfaceTolerance. In case ’dis-
tance_in_front’ is smaller than SurfaceTolerance, it determines the surface thickness to the front. This
parameter is useful if holes in the surface should be closed along a jump in the surface (for example along
the viewing direction of the sensor). In this case, ’distance_in_front’ can be set to a small value in order
to avoid a wrong initialization of the distance field. ’distance_in_front’ is set in the same unit as used in
ObjectModel3D. ’distance_in_front’ should always be a bit larger than Resolution in order to avoid
discretization artifacts. By default, ’distance_in_front’ is set to a value larger than the bounding box diameter,
so that all points in the bounding box outside of the object are considered.
Fusion algorithm
The algorithm will produce a watertight, closed surface (which may be cut off at the BoundingBox). The goal
is to obtain a preferably smooth surface while keeping form fidelity. To this end, the bounding box is sampled and
each sample point is assigned an initial distance to a so-called isosurface (consisting of points with distance 0).
The final distance values (and thus the isosurface) are obtained by minimizing an error function based on fidelity
to the initial point clouds on the one hand and total variation (’jumpiness’) of the distance function on the other
hand. This leads to a fusion of the input point clouds (see paper in References below).
The calculation of the isosurface can be influenced with the parameters of the operator. The distance between
sample points in the bounding box (in each coordinate direction) can be set with the parameter Resolution.
Fidelity to the initial point clouds is measured as the signed distances of the sample points, lying on the grid in the
bounding box, to their nearest neighbors (points or triangles) on the input point clouds. Whether a sample point in
the bounding box is considered to lie outside or inside the object (the sign of the distance) is determined by the
normal of its nearest neighbor on the initial surface and the set NormalDirection. To determine if a sample
point is surely inside or outside the object with respect to an input point cloud, the distance to its nearest neighbor
on the initial surface is determined. A point on the inside is considered surely inside if the distance exceeds
SurfaceTolerance but not MinThickness, while a point on the outside is considered surely outside if the
distance exceeds ’distance_in_front’.
Fidelity to the initial point clouds is only considered for those sample points lying within MinThickness inside
or within GenParamName ’distance_in_front’ outside the initial surface.
Furthermore, fidelity is not maintained for a given sample point lying outside a cone defined by GenParamName
’angle_threshold’: it is not maintained if the line from the sample point to its nearest neighbor on the
initial surface differs from the surface normal of the nearest neighbor by an angle of more than GenParamName
’angle_threshold’. Note that the distances to the nearest neighboring triangles will often yield more satisfying results,
while distances to the nearest points can be calculated much faster.
The subsequent optimization of the distance values is the same as the one used in
reconstruct_surface_stereo with Method=’surface_fusion’.
The parameter Smoothing regulates the ’jumpiness’ of the distance function by weighing the two terms in the
error function: Fidelity to the initial point clouds on the one hand, total variation of the distance function on the
other hand. Note that the actual value of Smoothing for a given data set to be visually pleasing has to be found
by trial and error.
Each 3D point of the object model returned in ObjectModel3DFusion is extracted from the isosurface where
the distance function equals zero. Its normal vector is calculated from the gradient of the distance function. The so-
obtained point cloud can also be meshed using the algorithm ’marching tetrahedra’ by setting the GenParamName
’point_meshing’ to the GenParamValue ’isosurface’.
Troubleshooting
Please follow the workflow above. If the results are not satisfactory, please consult the following hints and ideas:
Quality of the input point clouds The input point clouds should represent the entire object surface. If point nor-
mals are used, the points should be dense on the entire surface, not only along edges of the object. In
particular, for CAD data, a triangulation typically has to be used.
Used attribute Using triangles instead of point normals will typically yield results of higher quality. If
both attributes are present, point normals are used per default. If triangles should be used, use
copy_object_model_3d to obtain only point and triangle information.
Outliers If outliers of the input models disturb the output surface even for high values of Smoothing, try
to decrease GenParamName ’angle_threshold’. If desired, outliers of the input models can also be re-
moved, for example using connection_object_model_3d. Reducing GenParamName
’distance_in_front’ may also help to suppress certain outliers.
Closing of holes If holes in the surface are not closed even for high values of Smoothing (for example a
jump in the surface along the viewing direction of the sensor), try to decrease GenParamName ’dis-
tance_in_front’. Enlarging GenParamName ’angle_threshold’ may help the algorithm to close the gap.
Note that triangulate_object_model_3d can close gaps when triangulating sensor data which con-
tains a 2D mapping.
Empty output If the output contains no point, try to decrease Smoothing. If there is no output even for very
low values of Smoothing, you may want to check if MinThickness is set too large and if the set
NormalDirection is correct.
Runtime
In order to improve the runtime, consider the following hints:
Extent of the bounding box The bounding box should be tight around the volume of interest. Otherwise, the runtime
will increase drastically without any benefit.
Resolution Enlarging the parameter Resolution will speed up the execution considerably.
Used attribute Using point normals instead of triangles will speed up the execution. If both normals and triangles
are present in the input models, normals are used by default.
Density of input point clouds The input point clouds can be thinned out using sample_object_model_3d
(if normals are used) or simplify_object_model_3d with GenParamName ’avoid_triangle_flips’
set to ’true’ (if triangles are used).
Distances to surface Make sure that MinThickness and GenParamName ’distance_in_front’ are not set un-
necessarily large, since this can slow down the preparation and distance computation.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handles of the 3D object models.
. BoundingBox (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
The two opposite bounding box corners.
. Resolution (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Used resolution within the bounding box.
Default: 1.0
Suggested values: Resolution ∈ {1.0, 1.1, 1.5, 10.0, 100.0}
. SurfaceTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Distance of expected noise to surface.
Default: 1.0
Suggested values: SurfaceTolerance ∈ {1.0, 1.1, 1.5, 10.0, 100.0}
. MinThickness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Minimum thickness of the object in direction of the surface normal.
Default: 1.0
Suggested values: MinThickness ∈ {1.0, 1.1, 1.5, 10.0, 100.0}
. Smoothing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Weight factor for data fidelity.
Default: 1.0
Suggested values: Smoothing ∈ {1.0, 1.1, 1.5, 10.0, 100.0}
Possible Predecessors
read_object_model_3d, register_object_model_3d_pair,
register_object_model_3d_global, surface_normals_object_model_3d,
triangulate_object_model_3d, simplify_object_model_3d,
get_object_model_3d_params
Possible Successors
write_object_model_3d, create_surface_model
See also
reconstruct_surface_stereo
References
C. Zach, T. Pock, and H. Bischof: “A globally optimal algorithm for robust TV-L1 range image integration.”
Proceedings of IEEE International Conference on Computer Vision (ICCV 2007).
Module
3D Metrology
intersect_plane_object_model_3d ( : : ObjectModel3D,
Plane : ObjectModel3DIntersection )
This operator supports parameter broadcasting. This means that each parameter can be given as a tuple of length
1 (7 for Plane) or N (N*7 for Plane). Parameters with tuple length 1 (7 for Plane) will be repeated internally
such that the number of computed output models is always N.
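A minimal sketch of the broadcasting behavior; it is assumed here that the plane is given as a 7-element pose
whose x-y plane defines the intersection plane:

* One plane, 5 cm above the origin of the models' coordinate system,
* is broadcast to both input models.
Plane := [0, 0, 0.05, 0, 0, 0, 0]
intersect_plane_object_model_3d ([ObjectModel3D1, ObjectModel3D2], Plane, \
                                 ObjectModel3DIntersection)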
Parameters
Result
intersect_plane_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If neces-
sary, an exception is raised.
Execution Information
’cartesian’: First, each point is transformed into the camera coordinate system using the given Pose. Then,
these coordinates are projected into the image coordinate system based on the internal camera parameters
CamParam.
The internal camera parameters CamParam describe the projection characteristics of the camera (see Cali-
bration). The Pose has the form ccsPmcs, where ccs denotes the camera coordinate system and mcs the model
coordinate system (which is a 3D world coordinate system); see Transformations / Poses and “Solution
Guide III-C - 3D Vision”. Hence, it describes the position and orientation of the model coordinate
system relative to the camera coordinate system.
The X-, Y-, and Z-coordinates of the transformed point are written into the corresponding image at the
position of the projection. If multiple points are projected to the same image coordinates, the point with the
smallest Z-value is written (hidden-point removal). The dimensions of the returned images are defined by the
camera parameters.
The returned images show the object as it would look when seen with the specified camera under the specified
pose.
’cartesian_faces’: In order to use this transformation, the input 3D object models need to contain faces (tri-
angles or polygons), otherwise, the 3D object model without faces is disregarded. Note that if the 3D
object models have polygon faces, those are converted internally to triangles. This conversion can be
done beforehand to speed up this operator. For this, read_object_model_3d can be called with
the GenParamName ’convert_to_triangles’ set to ’true’, to convert all faces to triangles. Alternatively,
triangulate_object_model_3d can be called prior to this operator.
First, each face of the 3D object models ObjectModel3D is transformed into the camera coordinate system
using the given Pose. Then, these coordinates are projected into the image coordinate system based on the
internal camera parameters CamParam, while keeping the 3D information (X-, Y-, and Z-coordinates) for
each of those pixels. For a more detailed explanation of CamParam and Pose please refer to the section
’cartesian’. If multiple faces are projected to the same image coordinates, the value with the smallest Z-
value is written (hidden-point removal). The dimensions of the returned images are defined by the camera
parameters.
The returned images show the objects as they would look when seen with the specified camera under the
specified pose.
If OpenGL 2.1, GLSL 1.2, and the OpenGL extensions GL_EXT_framebuffer_object and
GL_EXT_framebuffer_blit are available, the computation is accelerated.
This Type can be used to create 3D object models containing 2D mapping data, by creating a 3D ob-
ject model from the returned images using xyz_to_object_model_3d. Note that in many cases, it is
recommended to use the 2D mapping data, if available, for speed and robustness reasons. This is beneficial
for example when using sample_object_model_3d, surface_normals_object_model_3d, or
when preparing a 3D object model for surface-based matching, e.g., smoothing, removing outliers, and re-
ducing the domain.
’cartesian_faces_no_opengl’: This transformation mode works in the same way as the method ’cartesian_faces’
but does not use OpenGL. In general, ’cartesian_faces’ automatically determines if OpenGL is available.
Thus, it is usually not required to use ’cartesian_faces_no_opengl’ explicitly. It can make sense, however,
to use it in cases where the automatic mode selection does not work due to, for example, driver issues with
OpenGL.
’from_xyz_map’: This transformation mode works only if the 3D object model was created with the operator
xyz_to_object_model_3d. It writes each 3D point to the image coordinate where it originally came
from, using the mapping attribute that is stored within the 3D object model.
The parameters CamParam and Pose are ignored. The dimensions of the returned images are equal to
the dimensions of the original images that were used with xyz_to_object_model_3d to create the 3D
object model and can be queried from get_object_model_3d_params with ’mapping_size’.
This transformation mode is faster than ’cartesian’. It is suitable, e.g., to visualize the results of a segmenta-
tion done with segment_object_model_3d.
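A minimal sketch of the ’from_xyz_map’ mode for visualizing a segmentation result; the XYZ images are
hypothetical:

xyz_to_object_model_3d (XImage, YImage, ZImage, ObjectModel3D)
* Keep the mapping in the segmentation result so it can be re-projected.
segment_object_model_3d (ObjectModel3D, 'output_xyz_mapping', 'true', \
                         ObjectModel3DOut)
* CamParam and Pose are ignored in this mode and can be left empty.
object_model_3d_to_xyz (X, Y, Z, ObjectModel3DOut, 'from_xyz_map', [], [])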
Attention
Cameras with hypercentric lenses are not supported. For displaying large faces with a non-zero distortion in
CamParam, note that the distortion is only applied to the points of the model. In the projection, these points are
subsequently connected by straight lines. For a good approximation of the distorted lines, please use a triangulation
with sufficiently small triangles.
Parameters
. X (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Image with the X-Coordinates of the 3D points.
. Y (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Image with the Y-Coordinates of the 3D points.
. Z (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Image with the Z-Coordinates of the 3D points.
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model.
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the conversion.
Default: ’cartesian’
List of values: Type ∈ {’cartesian’, ’cartesian_faces’, ’from_xyz_map’, ’cartesian_faces_no_opengl’}
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Camera parameters.
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Pose of the 3D object model.
Number of elements: Pose == 0 || Pose == 7 || Pose == 12
Result
The operator object_model_3d_to_xyz returns the value 2 (H_MSG_TRUE) if the given parameters are
correct. Otherwise, an exception will be raised.
Execution Information
coordinates and an attribute with the mapping from the point coordinates to image coordinates. Only points
originating from neighboring pixels are triangulated. Additionally, holes in the image region can be filled
with a Delaunay triangulation (see ’max_area_holes’ below). Only holes which are completely surrounded
by the image region are closed.
’distance_computation’: The 3D object model is prepared to be used in distance_object_model_3d.
’gen_xyz_mapping’: The XYZ-mapping information of a 3D object model containing an ordered point cloud is
computed, i.e. image coordinates are assigned for each 3D point. For this, either the generic parameter
’xyz_map_width’ or ’xyz_map_height’ must be set, to indicate whether the point cloud is ordered row-wise
or column-wise and define the image dimensions (see ’xyz_map_width’ and ’xyz_map_height’ below).
Note that in many cases, it is recommended to use the 2D mapping data, if available, for speed
and robustness reasons. This is beneficial especially when using sample_object_model_3d,
surface_normals_object_model_3d, or when preparing a 3D object model for surface-based
matching, e.g., smoothing, removing outliers, and reducing the domain.
The parameter OverwriteData defines whether the existing data of an already prepared 3D object model shall be
removed. If OverwriteData is set to ’true’, the prepared data, defined with the parameter Purpose, is over-
written. If OverwriteData is set to ’false’, the prepared data is not overwritten. If there is no prepared data,
OverwriteData is ignored and the data is saved in the 3D object model. The parameter OverwriteData can be
used to choose another set of generic parameters GenParamName and GenParamValue. The parameter
OverwriteData has no influence if the parameter Purpose is set to ’shape_based_matching_3d’, because in
that case there are no generic parameters to set.
The generic parameters can optionally be used to influence the preparation. If desired, these parameters and
their corresponding values can be specified by using GenParamName and GenParamValue, respectively. The
following values for GenParamName are possible:
’max_area_holes’: This parameter is only valid if Purpose is set to ’segmentation’. The parameter specifies up
to which area (in pixels) holes in the point coordinates are closed by a simple Delaunay triangulation. Only holes
which are completely surrounded by the image region are closed. If ’max_area_holes’ is set to 0, no holes
are triangulated. If ’max_area_holes’ is set to a value greater than or equal to 1 pixel, holes with an area
less than or equal to ’max_area_holes’ are closed by meshing.
Suggested values: 1, 10, 100.
Default: 10.
’distance_to’: This parameter is only valid if Purpose is set to ’distance_computation’. The parameter specifies
the type of data to which the distance shall be computed. It is described in more detail in the documentation
of distance_object_model_3d.
List of values: ’auto’, ’triangles’, ’points’, ’primitive’.
Default: ’auto’.
’method’: This parameter is only valid if Purpose is set to ’distance_computation’. The parameter specifies
the method to be used for the distance computation. It is described in more detail in the documentation of
distance_object_model_3d.
List of values: ’auto’, ’kd-tree’, ’voxel’, ’linear’.
Default: ’auto’.
’max_distance’: This parameter is only valid if Purpose is set to ’distance_computation’. The parameter speci-
fies the maximum distance of interest for the distance computation. If it is set to 0, no maximum distance is
used. It is described in more detail in the documentation of distance_object_model_3d.
Suggested values: 0, 0.1, 1, 10.
Default: 0.
’sampling_dist_rel’: This parameter is only valid if Purpose is set to ’distance_computation’. The parameter
specifies the relative sampling distance when computing the distance to triangles with the method ’voxel’. It
is described in more detail in the documentation of distance_object_model_3d.
Suggested values: 0.03, 0.01.
Default: 0.03.
’sampling_dist_abs’: This parameter is only valid if Purpose is set to ’distance_computation’. The parameter
specifies the absolute sampling distance when computing the distance to triangles with the method ’voxel’. It
is described in more detail in the documentation of distance_object_model_3d.
Suggested values: 1, 5, 10.
Default: None.
’xyz_map_width’: This parameter is only valid if Purpose is set to ’gen_xyz_mapping’. The parameter in-
dicates that the point cloud is ordered row-wise and the passed value is used as the width of the image.
The height of the image is calculated automatically. Only one of the two parameters ’xyz_map_width’ and
’xyz_map_height’ can be set.
Default: None.
’xyz_map_height’: This parameter is only valid if Purpose is set to ’gen_xyz_mapping’. The parameter indi-
cates that the point cloud is ordered column-wise and the passed value is used as the height of the image.
The width of the image is calculated automatically. Only one of the two parameters ’xyz_map_width’ and
’xyz_map_height’ can be set.
Default: None.
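A minimal sketch for preparing a model for repeated distance computations; the file name, the parameter values,
and the assumed parameter order prepare_object_model_3d (ObjectModel3D, Purpose, OverwriteData,
GenParamName, GenParamValue) should be checked against the signature:

read_object_model_3d ('reference.om3', 'm', [], [], ObjectModel3D, Status)
* Precompute data structures for distance_object_model_3d.
prepare_object_model_3d (ObjectModel3D, 'distance_computation', 'true', \
                         ['distance_to','max_distance'], ['triangles',0.01])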
Parameters
Result
The operator prepare_object_model_3d returns the value 2 (H_MSG_TRUE) if the given parameters are
correct. Otherwise, an exception will be raised.
Execution Information
’data’: This parameter specifies which geometric data of the 3D object model should be projected. If ’data’ is
set to ’faces’, the faces of the 3D object model are projected. The faces are represented by their border
lines in ModelContours. If ’data’ is set to ’lines’, the 3D lines of the 3D object model are projected.
If ’data’ is set to ’points’, the points of the 3D object model are projected. The projected points can be
represented in ModelContours in different ways. The point representation can be selected by using the
generic parameter ’point_shape’ (see below). Finally, if ’data’ is set to ’auto’, HALCON automatically
chooses the most descriptive geometry data that is available in the 3D object model for visualization.
List of values: ’auto’, ’faces’, ’lines’, ’points’.
Default: ’auto’.
’point_shape’: This parameter specifies how points are represented in the output contour ModelContours.
Consequently, this parameter only has an effect if the points of the 3D object model are selected for projection
(see above). If ’point_shape’ is set to ’circle’, points are represented by circles, whereas if ’point_shape’ is
set to ’cross’, points are represented by crosses. In both cases the size of the points (i.e., the size of the circles
or the size of the crosses) can be specified by the generic parameter ’point_size’ (see below). The orientation
of the crosses can be specified by the generic parameter ’point_orientation’ (see below).
List of values: ’circle’, ’cross’.
Default: ’circle’.
’point_size’: This parameter specifies the size of the point representation in the output contour ModelContours,
i.e., the size of the circles or the size of the crosses depending on the selected ’point_shape’. Consequently,
this parameter only has an effect if the points of the 3D object model are selected for projection (see above).
The size must be given in pixel units. If ’point_size’ is set to 0, each point is represented by a contour that
contains a single contour point.
Suggested values: 0, 2, 4.
Default: 4.
’point_orientation’: This parameter specifies the orientation of the crosses in radians. Consequently, this parame-
ter only has an effect if the points of the 3D object model are selected for projection and ’point_shape’ is set
to ’cross’ (see above).
Suggested values: 0, 0.39, 0.79.
Default: 0.79.
’union_adjacent_contours’: This parameter specifies if adjacent projected contours should be joined or not. Ac-
tivating this option is equivalent to calling union_adjacent_contours_xld after this operator, but
significantly faster.
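A minimal sketch projecting the faces of a model as XLD contours; the camera parameters and the pose are
hypothetical:

CamParam := ['area_scan_division',0.012,0,5.2e-6,5.2e-6,320,240,640,480]
Pose := [0.05,-0.02,0.4,350,10,20,0]
* Project the faces and join adjacent contours directly in the operator.
project_object_model_3d (ModelContours, ObjectModel3D, CamParam, Pose, \
                         ['data','union_adjacent_contours'], \
                         ['faces','true'])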
Attention
Cameras with hypercentric lenses are not supported. For displaying large faces with a non-zero distortion in
CamParam, note that the distortion is only applied to the points of the model. In the projection, these points are
subsequently connected by straight lines. For a good approximation of the distorted lines, please use a triangulation
with sufficiently small triangles.
Parameters
. ModelContours (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; object
Projected model contours.
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the 3D object model.
. CamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
3D pose of the world coordinate system in camera coordinates.
Number of elements: Pose == 7
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Name of the generic parameter.
Default: []
List of values: GenParamName ∈ {’hidden_surface_removal’, ’min_face_angle’, ’data’,
’point_shape’, ’point_size’, ’point_orientation’, ’union_adjacent_contours’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string / integer / real
Value of the generic parameter.
Default: []
Suggested values: GenParamValue ∈ {0.17, 0.26, 0.35, 0.52, ’true’, ’false’, ’auto’, ’points’, ’faces’,
’lines’, ’circle’, ’cross’, 1, 2, 3, 4, 0.785398}
Result
project_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an excep-
tion is raised. If the geometric data that was selected for the projection is not available in the 3D object model, the
error 9514 is raised.
Execution Information
projective_trans_object_model_3d ( : : ObjectModel3D,
HomMat3D : ObjectModel3DProjectiveTrans )
Possible Predecessors
read_object_model_3d, xyz_to_object_model_3d
Possible Successors
project_object_model_3d, object_model_3d_to_xyz
See also
affine_trans_point_3d, rigid_trans_object_model_3d,
affine_trans_object_model_3d
Module
3D Metrology
Improve the relative transformations between 3D object models based on their overlaps.
register_object_model_3d_global improves the relative transformations between 3D object models,
which is called global registration. In particular, under the assumption that all input 3D object models in
ObjectModels3D have a known approximate spatial relation, all possible pairwise overlapping areas are calcu-
lated and optimized for a better alignment. The resulting offset is then minimized simultaneously for all pairs. The
entire process is repeated iteratively from the newly resulting starting poses. The result in HomMats3DOut
describes, for each input 3D object model, a transformation that can be applied with
affine_trans_object_model_3d to transform all models into a common reference frame. Scores contains,
for every 3D object model, the number of neighbors found with sufficient overlap. If no overlap is found for at
least one object, an exception is raised.
Three types for the interpretation of the starting poses in HomMats3D are available, which is controlled by the
parameters From and To:
First, if From is set to ’global’, the parameter HomMats3D must contain a rigid transformation with 12 entries for
each 3D object model in ObjectModels3D that describes its position in relation to a common global reference
frame. In this case, To must be empty. This case is suitable, e.g., if transformations are applied by a turning table
or a robot to either the camera or the object. In this case, all neighborhoods that are possible are considered for the
global optimization.
Second, if From is set to ’previous’, the parameter HomMats3D must contain a rigid transformation for
each subsequent pair of 3D object models in ObjectModels3D (one less than for the first case). An
example for this situation might be a matching applied consecutively to the previous frame (e.g., with
register_object_model_3d_pair). To must be empty again. In this case, all neighborhoods that are
possible are considered for the global optimization.
Third, you can describe any transformation in HomMats3D by setting From and To to the indices of the 3D
object models for which the corresponding transformation is valid. That is, a given transformation describes the
transformation that is needed to move the 3D object model with the index that is specified in From into the
coordinate system of the 3D object model with the corresponding index that is specified in To. In this case,
HomMats3D should contain all possible neighborhood relations between the objects, since no other
neighborhoods are considered for the optimization. Note that the specified transformations must contain at least
one path of transformations connecting each 3D object model with every other 3D object model.
If ObjectModels3D contains 3D-primitives, they will internally be transformed into point clouds and will be
considered as such.
The accuracy of the returned poses is limited to around 0.1% of the size of the point clouds for numerical
reasons. The accuracy further depends on the noise of the data points, the number of data points, and the shape of
the point clouds.
The process of the global registration can be controlled further by the following generic parameters in
GenParamName and GenParamValue:
’default_parameters’: Allows choosing between two default parameter sets, i.e., switching between a ’fast’ and
an ’accurate’ set of parameters.
List of values: ’fast’, ’accurate’.
Default: ’accurate’.
’rel_sampling_distance’: The relative sampling rate of the 3D object models. This value is relative to the object’s
diameter and refers to the minimal distance between two sampled points. A higher value leads to faster
results, whereas a lower value leads to more accurate results.
Suggested values: 0.03, 0.05, 0.07.
Default: 0.05 (’default_parameters’ = ’accurate’), 0.07 (’default_parameters’ = ’fast’).
Restriction: 0 < ’rel_sampling_distance’ < 1
’pose_ref_sub_sampling’: Number of points that are skipped for the pose refinement. The value specifies the
number of points that are skipped per selected point. Increasing this value allows faster convergence at the
cost of less accurate results. The internally used method for the refinement is asymmetric and this parameter
only affects the second model of each tested pair.
Suggested values: 1, 2, 20.
Default: 2 (’default_parameters’ = ’accurate’), 10 (’default_parameters’ = ’fast’).
Restriction: ’pose_ref_sub_sampling’ > 0
’max_num_iterations’: Number of iterations applied to adjust the initial alignment. The better the initial alignment
is, the less iterations are necessary.
Suggested values: 1, 3, 10.
Default: 3.
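A minimal sketch of the ’previous’ mode, chaining pairwise registrations of consecutive scans; the handle tuple
ObjectModels3D and the direction of the pairwise transformations are assumptions for illustration:

HomMats3D := []
for I := 1 to |ObjectModels3D| - 1 by 1
    * Register each scan to its predecessor to obtain the starting poses.
    register_object_model_3d_pair (ObjectModels3D[I], ObjectModels3D[I-1], \
                                   'matching', [], [], Pose, Score)
    pose_to_hom_mat3d (Pose, HomMat3D)
    HomMats3D := [HomMats3D, HomMat3D]
endfor
register_object_model_3d_global (ObjectModels3D, HomMats3D, 'previous', [], \
                                 [], [], HomMats3DOut, Scores)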
Parameters
. ObjectModels3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handles of several 3D object models.
. HomMats3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; real / integer
Approximate relative transformations between the 3D object models.
. From (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; string / integer
Type of interpretation for the transformations.
Default: ’global’
List of values: From ∈ {’global’, ’previous’, 0, 1, 2, 3, 4}
. To (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; integer
Target indices of the transformations if From specifies the source indices, otherwise the parameter must be
empty.
Default: []
List of values: To ∈ {0, 1, 2, 3, 4}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
Names of the generic parameters that can be adjusted for the global 3D object model registration.
Default: []
List of values: GenParamName ∈ {’default_parameters’, ’rel_sampling_distance’,
’pose_ref_sub_sampling’, ’max_num_iterations’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer / string
Values of the generic parameters that can be adjusted for the global 3D object model registration.
Default: []
Suggested values: GenParamValue ∈ {0.03, 0.05, 0.07, 0.1, 0.25, 0.5, 1, 2, 5, 10, 20, ’fast’, ’accurate’}
. HomMats3DOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d-array ; real / integer
Resulting Transformations.
. Scores (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real
Number of overlapping neighbors for each 3D object model.
Result
register_object_model_3d_global returns 2 (H_MSG_TRUE) if all parameters are correct. If neces-
sary, an exception is raised.
Execution Information
register_object_model_3d_pair ( : : ObjectModel3D1,
ObjectModel3D2, Method, GenParamName, GenParamValue : Pose,
Score )
’default_parameters’: To allow easy control over the parameters, three different sets of parameters are available.
Selecting the ’fast’ parameter set allows a shorter calculation time. ’accurate’ will give more accurate results.
’robust’ additionally improves the quality of the resulting Score at the cost of calculation time.
List of values: ’fast’, ’accurate’, ’robust’.
Default: ’accurate’.
’rel_sampling_distance’: This parameter controls the relative sampling rate of the 3D object models that is used
to represent the surfaces for the computation. This value is relative to the diameter of the respective object
and defines the minimal distance between two sampled points. A higher value will lead to faster and a
lower value to more accurate results. This parameter can also be set for each object independently by using
’rel_sampling_distance_obj1’ and ’rel_sampling_distance_obj2’.
Suggested values: 0.03, 0.05, 0.07.
Default: 0.05.
’key_point_fraction’: This parameter controls the ratio of sampled points that are considered as key points for the
matching process. The number is relative to the sampled points of the model. Reducing this ratio speeds up
the process, whereas increasing it leads to more robust results. This parameter can also be set for each object
independently by using ’key_point_fraction_obj1’ and ’key_point_fraction_obj2’.
Suggested values: 0.2, 0.3, 0.4.
Default: 0.3.
’pose_ref_num_steps’: The number of iterative steps used for the pose refinement.
Suggested values: 5, 7, 10.
Default: 5.
’pose_ref_sub_sampling’: Number of points that are skipped for the pose refinement. The value specifies the
number of points that are skipped per selected point. Increasing this value allows faster convergence at the
cost of less accurate results. This parameter is only relevant for the smaller of the two objects.
Suggested values: 1, 2, 20.
Default: 2.
’pose_ref_dist_threshold_rel’: The maximum distance two faces may have in order to be considered as potentially
overlapping. This value is relative to the diameter of the larger object.
Suggested values: 0.05, 0.1, 0.15.
Default: 0.1.
’pose_ref_dist_threshold_abs’: The maximum distance two faces may have in order to be considered as potentially
overlapping, given as an absolute value.
’model_invert_normals’: Invert the normals of the smaller object, if its normals are inverted relative to the other
object.
List of values: ’true’, ’false’.
Default: ’false’.
Parameters
. ObjectModel3D1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the first 3D object model.
. ObjectModel3D2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the second 3D object model.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Method for the registration.
Default: ’matching’
List of values: Method ∈ {’matching’, ’icp’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’default_parameters’, ’rel_sampling_distance’,
’rel_sampling_distance_obj1’, ’rel_sampling_distance_obj2’, ’key_point_fraction’,
’key_point_fraction_obj1’, ’key_point_fraction_obj2’, ’pose_ref_num_steps’, ’pose_ref_sub_sampling’,
’pose_ref_dist_threshold_rel’, ’pose_ref_dist_threshold_abs’, ’model_invert_normals’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer / string
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {’fast’, ’accurate’, ’robust’, 0.1, 0.25, 0.5, 1, ’true’, ’false’}
. Pose (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Pose that transforms ObjectModel3D1 into the reference frame of ObjectModel3D2.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real
Overlapping of the two 3D object models.
Example
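A minimal sketch of a typical call (the file names are assumptions; the resulting Pose can be applied with rigid_trans_object_model_3d):
* Read two overlapping scans (hypothetical file names).
read_object_model_3d ('scan_left.om3', 'm', [], [], OM3D1, Status1)
read_object_model_3d ('scan_right.om3', 'm', [], [], OM3D2, Status2)
* Coarse alignment via surface-based matching with the predefined
* 'accurate' parameter set.
register_object_model_3d_pair (OM3D1, OM3D2, 'matching', ['default_parameters'], ['accurate'], Pose, Score)
* Transform the first model into the reference frame of the second one.
rigid_trans_object_model_3d (OM3D1, Pose, OM3DAligned)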
Result
register_object_model_3d_pair returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary,
an exception is raised.
Execution Information
Possible Predecessors
read_object_model_3d, gen_object_model_3d_from_points, xyz_to_object_model_3d
Possible Successors
register_object_model_3d_global, affine_trans_object_model_3d,
union_object_model_3d
See also
register_object_model_3d_global, find_surface_model
Module
3D Metrology
• Multithreading type: mutually exclusive (runs in parallel with other non-exclusive operators, but not with
itself).
• Multithreading scope: global (may be called from any thread).
• Processed without parallelization.
Possible Predecessors
find_surface_model, fit_primitives_object_model_3d, segment_object_model_3d,
read_object_model_3d, xyz_to_object_model_3d
Possible Successors
disp_obj
See also
disp_object_model_3d, project_shape_model_3d, object_model_3d_to_xyz
Module
3D Metrology
rigid_trans_object_model_3d ( : : ObjectModel3D,
Pose : ObjectModel3DRigidTrans )
distance or the number of points in SampledObjectModel3D. The created 3D object model is returned in
SampledObjectModel3D.
Using sample_object_model_3d is recommended if complex point clouds are to be thinned out for
faster postprocessing or if primitives are to be converted to point clouds. Note that if the 3D object
model is triangulated and should be simplified while preserving its original geometry as well as possible,
simplify_object_model_3d should be used instead.
If the input object model ObjectModel3D contains only points, several sampling methods are available which
can be selected using the parameter Method:
’fast’: The default method ’fast’ adds all points from the input model which are not closer than SamplingParam
to any point that was earlier added to the output model. If present, normals, XYZ-mapping and extended point
attributes are copied to the output model.
’fast_compute_normals’: The method ’fast_compute_normals’ selects the same points as the method ’fast’, but
additionally calculates the normals for all points that were selected. For this, the input object model must
either contain normals, which are copied, or it must contain an XYZ-mapping attribute from which the normals
are computed. The z-component of the calculated normal vectors is always positive. The XYZ-mapping is
created by xyz_to_object_model_3d.
’accurate’: The method ’accurate’ goes through the points of the 3D object model ObjectModel3D and cal-
culates whether any other points are within a sphere with the radius SamplingParam around the ex-
amined point. If there are no other points, the original point is stored in SampledObjectModel3D.
If there are other points, the center of gravity of these points (including the original point) is stored in
SampledObjectModel3D. This procedure is repeated with the remaining points until there are no points
left. Extended attributes of the input 3D object model are not copied, but normals and XYZ-mapping
are copied. For this method, a noise removal is possible by specifying a value for ’min_num_points’ in
GenParamName and GenParamValue, which removes all interpolated points that had less than the spec-
ified number of neighbor points in the original model.
’accurate_use_normals’: The method ’accurate_use_normals’ requires normals in the input 3D object model and
interpolates only points with similar normals. The similarity depends on the angle between the normals. The
threshold of the angle can be specified in GenParamName and GenParamValue with ’max_angle_diff’.
The default value is 180 degrees. Additionally, outliers can be removed as described in the method ’accurate’,
by setting the generic parameter ’min_num_points’.
’xyz_mapping’: The method ’xyz_mapping’ can only be applied to 3D object models that contain an XYZ-
mapping (for example, if it was created using xyz_to_object_model_3d). This mapping stores for
each 3D point its original image coordinates. The method ’xyz_mapping’ subdivides those original images
into squares with side length SamplingParam (which is given in pixel) and selects one 3D point per square.
The method behaves similarly to applying zoom_image_factor to the original XYZ-images. Note that
this method does not use the 3D-coordinates of the points for the point selection, only their 2D image coor-
dinates.
It is important to note that for this method, the parameter SamplingParam corresponds to a distance in
pixels, not to a distance in 3D space.
’xyz_mapping_compute_normals’: The method ’xyz_mapping_compute_normals’ selects the same points as the
method ’xyz_mapping’, but additionally calculates the normals for all points that were selected. The z-
component of the normal vectors is always positive. If the input object model contains normals, those normals
are copied to the output. Otherwise, the normals are computed based on the XYZ-mapping.
’furthest_point’: The method ’furthest_point’ iteratively adds the point of the input object to the output object that
is furthest from all points already added to the output model. This usually leads to a reasonably uniform
sampling. For this method, the desired number of points in the output model is passed in SamplingParam.
If that number exceeds the number of points in the input object, then all points of the input object are returned.
The first point added to the output object is the point that is furthest away from the center of the axis aligned
bounding box around the points of the input object.
’furthest_point_compute_normals’: The method ’furthest_point_compute_normals’ selects the same points as the
method ’furthest_point’, but additionally calculates the normals for all points that were selected. The number
of desired points in the output object is passed in SamplingParam.
To compute the normals, the input object model must either contain normals, which are copied, or it must
contain an XYZ-mapping attribute from which the normals are computed. The z-component of the calculated
normal vectors is always positive. The XYZ-mapping is created by xyz_to_object_model_3d.
If the input object model contains faces (triangles or polygons) or is a 3D primitive, the surface is sampled with the
given distance. In this case, the method specified in Method is ignored. The directions of the computed normals
depend on the face orientation of the model. Usually, the orientation of the faces does not vary within one CAD
model, which results in a set of normals that is either pointing inwards or outwards. Note that planes and cylinders
must have finite extent. If the input object model contains lines, the lines are sampled with the given distance
SamplingParam.
The sampling process approximates surfaces by creating new points in the output object model. Therefore, any
extended attributes from the input object model are discarded.
For mixed input object models, the sampling priority is (from top to bottom) faces, lines, primitives and points,
i.e., only the objects of the highest priority are sampled.
The parameter SamplingParam accepts either one value, which is then used for all 3D object models passed in
ObjectModel3D, or one value per input object model. If SamplingParam is a distance in 3D space the unit
is the usual HALCON-internal unit ’m’.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model to be sampled.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Selects between the different subsampling methods.
Default: ’fast’
List of values: Method ∈ {’fast’, ’fast_compute_normals’, ’accurate’, ’accurate_use_normals’,
’xyz_mapping’, ’xyz_mapping_compute_normals’, ’furthest_point’, ’furthest_point_compute_normals’}
. SamplingParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer
Sampling distance or number of points.
Number of elements: SamplingParam == 1 || SamplingParam == ObjectModel3D
Default: 0.05
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
Names of the generic parameters that can be adjusted.
Default: []
List of values: GenParamName ∈ {’min_num_points’, ’max_angle_diff’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer / string
Values of the generic parameters that can be adjusted.
Default: []
Suggested values: GenParamValue ∈ {1, 2, 5, 10, 20, 0.1, 0.25, 0.5}
. SampledObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handle of the 3D object model that contains the sampled points.
Number of elements: SampledObjectModel3D == ObjectModel3D
Example
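A minimal sketch (the file name and the 2 mm sampling distance are assumptions; ’fast_compute_normals’ presumes that the model contains normals or an XYZ-mapping):
* Read a dense point cloud (hypothetical file name).
read_object_model_3d ('dense_scan.om3', 'm', [], [], ObjectModel3D, Status)
* Thin the cloud out to a minimal point distance of 2 mm and compute normals.
sample_object_model_3d (ObjectModel3D, 'fast_compute_normals', 0.002, [], [], SampledObjectModel3D)
* Inspect the number of remaining points.
get_object_model_3d_params (SampledObjectModel3D, 'num_points', NumPoints)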
Result
sample_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception
is raised.
Execution Information
Possible Predecessors
read_object_model_3d, gen_plane_object_model_3d, gen_sphere_object_model_3d,
gen_cylinder_object_model_3d, gen_box_object_model_3d,
gen_sphere_object_model_3d_center, xyz_to_object_model_3d
Possible Successors
get_object_model_3d_params, clear_object_model_3d
Alternatives
simplify_object_model_3d, smooth_object_model_3d
Module
3D Metrology
’percentage_remaining’ (default): Amount specifies the percentage of points of the input object model that
should be contained in the output object model.
Value range: [0.0 ... 100.0].
’percentage_to_remove’: Amount specifies the percentage of points of the input object model that should be
removed.
Value range: [0.0 ... 100.0].
’num_points_remaining’: Amount specifies the number of points of the input object model that should be con-
tained in the output object model.
Value range: [0 ... number of points in the input object model].
’num_points_to_remove’: Amount specifies the number of points of the input object model that should be re-
moved.
Value range: [0 ... number of points in the input object model].
Sometimes triangular meshes flip during the simplification, i.e., the direction of their normal vectors changes by
180 degrees. This especially happens for artificially created CAD models that consist of planar parts. To avoid this
flipping, the generic parameter ’avoid_triangle_flips’ can be set to ’true’ (the default is ’false’). Note that in this
case, the run-time of simplify_object_model_3d will increase.
Note that multiple calls of simplify_object_model_3d with a lower degree of simplification might re-
sult in a different simplified object model compared to a single call with a higher degree of simplification.
Also note that isolated (i.e., non-triangulated) points will be removed. This might result in a number of points
in SimplifiedObjectModel3D that slightly deviates from the degree of simplification that is specified in
Amount.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model that should be simplified.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Method that should be used for simplification.
Default: ’preserve_point_coordinates’
List of values: Method ∈ {’preserve_point_coordinates’}
. Amount (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Degree of simplification (default: percentage of remaining model points).
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’amount_type’, ’avoid_triangle_flips’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value-array ; string / real
Values of the generic parameters.
Default: []
Suggested values: GenParamValue ∈ {’percentage_remaining’, ’percentage_to_remove’,
’num_points_remaining’, ’num_points_to_remove’, ’true’, ’false’}
. SimplifiedObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . object_model_3d(-array) ; handle
Handle of the simplified 3D object model.
Example
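A minimal sketch (the file name is an assumption; 20 percent of the points are kept and triangle flips are avoided):
* Read a triangulated CAD model (hypothetical file name).
read_object_model_3d ('cad_part.om3', 'm', [], [], ObjectModel3D, Status)
* Keep 20% of the model points; avoid flipped triangles at the cost of run-time.
simplify_object_model_3d (ObjectModel3D, 'preserve_point_coordinates', 20, ['avoid_triangle_flips'], ['true'], SimplifiedObjectModel3D)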
Result
simplify_object_model_3d returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an ex-
ception is raised.
Execution Information
Possible Predecessors
prepare_object_model_3d, read_object_model_3d, triangulate_object_model_3d,
xyz_to_object_model_3d
Possible Successors
disp_object_model_3d, smallest_bounding_box_object_model_3d
Alternatives
sample_object_model_3d, smooth_object_model_3d
References
Michael Garland, Paul S. Heckbert: Surface Simplification Using Quadric Error Metrics, Proceedings of the 24th
Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’97), 209-216, ACM Press,
1997
Module
3D Metrology
w(P′) = exp(−‖P′ − P‖² / σ²)
The point is then projected on the surface. This process is repeated for all points resulting in a smoothed point
set. The fitted surfaces have well defined normals (i.e., they can easily be computed from the surface parameters).
Therefore, the points are augmented by the corresponding normals as side effect of the smoothing.
Additional parameters can be adjusted for the MLS smoothing specifically using the following parameter names
and values for GenParamName and GenParamValue:
’mls_kNN’: Specify the number of nearest neighbors k that are used to fit the MLS surface to each point.
Suggested values: 40, 60, 80, 100, 400.
Default: 60.
’mls_order’: Specify the order of the MLS polynomial surface. For ’mls_order’=1 the surface is a plane.
Suggested values: 1, 2, 3.
Default: 2.
’mls_abs_sigma’: Specify the weighting parameter σ as a fixed absolute value in meter. The value to be selected
depends on the scale of the point data. As a rule of thumb, σ can be selected to be the typical distance
between a point P and its k/2-th neighbor Pk/2 . Note that setting an absolute weighting parameter for point
data with varying density might result in different smoothing results for points that are situated in parts of the
point data with different densities. This problem can be avoided by using ’mls_relative_sigma’ instead, which is
scale independent and therefore also a more convenient way to specify the neighborhood weighting. Note
that if ’mls_abs_sigma’ is passed, any value set in ’mls_relative_sigma’ is ignored.
Suggested values: 0.0001, 0.001, 0.01, 0.1, 1.0.
’mls_relative_sigma’: Specify a multiplication factor σrel that is used to compute σP for a point P by the formula:
σP = σrel · ‖Pk/2 − P‖,
where Pk/2 is the k/2-th neighbor of P . Note that, unlike σ, which is a global parameter for all points, σP
is computed for each point P and therefore adapts the weighting function to its neighborhood. This avoids
problems that might appear while trying to set a global parameter σ (’mls_abs_sigma’) to a point data with
highly varying point density. Note however that if ’mls_abs_sigma’ is set, ’mls_relative_sigma’ is ignored.
Suggested values: 0.1, 0.5, 1.0, 1.5, 2.0.
Default: 1.0.
’mls_force_inwards’: If this parameter is set to ’true’, all surface normals are oriented such that they point “in
the direction of the origin”. Expressed mathematically, it is ensured that the scalar product between the
normal vector and the vector from the respective surface point to the origin is positive. This may be nec-
essary if the resulting SmoothObjectModel3D is used for surface-based matching, either as model in
create_surface_model or as 3D scene in find_surface_model, because here, the consistent orientation
of the normals is important for the matching process. If ’mls_force_inwards’ is set to ’false’, the normal
vectors are oriented arbitrarily.
List of values: ’true’, ’false’.
Default: ’true’.
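A minimal sketch of an MLS smoothing call that uses the parameters above (the file name and the chosen parameter values are assumptions):
* Read a noisy point cloud (hypothetical file name).
read_object_model_3d ('noisy_scan.om3', 'm', [], [], ObjectModel3D, Status)
* MLS smoothing with a plane fit per neighborhood and inward-pointing normals,
* e.g., as preparation for create_surface_model.
smooth_object_model_3d (ObjectModel3D, 'mls', ['mls_kNN','mls_order','mls_force_inwards'], [60,1,'true'], SmoothObjectModel3D)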
2D mapping smoothing
By selecting Method=’xyz_mapping’ or Method=’xyz_mapping_compute_normals’, the coordinates of the 3D
points are smoothed using a 2D filter and the 2D mapping contained in ObjectModel3D. Additionally, for
Method=’xyz_mapping_compute_normals’, SmoothObjectModel3D is extended by normals computed from
the XYZ-mapping. If no 2D mapping is available, an exception is raised. As the filter operates on the 2D depth
image, using Method=’xyz_mapping’ or Method=’xyz_mapping_compute_normals’ is usually faster than using
Method=’mls’. Invalid points (e.g., duplicated points with coordinates [0,0,0]) should be removed from the 3D
object model before applying the operator, e.g., by using select_points_object_model_3d with attribute
’point_coord_z’ or ’num_neighbors_fast X’.
Additional parameters can be adjusted for the 2D mapping smoothing specifically using the following parameter
names and values for GenParamName and GenParamValue:
’xyz_mapping_filter’: Specify the filter used for smoothing the 2D mapping. The sizes of the corresponding filter
mask are set with ’xyz_mapping_mask_width’ and ’xyz_mapping_mask_height’.
In the default filter mode ’median_separate’, the filter method used on the 2D image is comparable to
median_separate. This mode is usually faster than ’median’, but can also lead to less accurate results
or artifacts at surface discontinuities.
Using filter mode ’median’, the used filter method is comparable to median_image.
List of values: ’median_separate’, ’median’.
Default: ’median_separate’.
’xyz_mapping_mask_width’, ’xyz_mapping_mask_height’: Specify the width and height of the used filter mask.
For ’xyz_mapping_filter’=’median_separate’ or ’xyz_mapping_filter’=’median’, even values for
’xyz_mapping_mask_width’ or ’xyz_mapping_mask_height’ are increased to the next odd value auto-
matically.
For ’xyz_mapping_filter’=’median’, the used filter mask must be quadratic (’xyz_mapping_mask_width’
= ’xyz_mapping_mask_height’). Thus, when setting only ’xyz_mapping_mask_width’ or
’xyz_mapping_mask_height’, the other parameter is set to the same value automatically. If two differ-
ent values are set, an error is raised.
Suggested values: 3, 5, 7, 9.
Default: 3.
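A minimal sketch of the 2D mapping variant (the X, Y, Z coordinate images and the limits on ’point_coord_z’ are assumptions; invalid points are removed first, as recommended above):
* Build a 3D object model with an XYZ-mapping from three coordinate images.
xyz_to_object_model_3d (X, Y, Z, ObjectModel3D)
* Remove invalid points, e.g., points with z close to 0 (hypothetical limits).
select_points_object_model_3d (ObjectModel3D, 'point_coord_z', 0.001, 10.0, ObjectModel3DValid)
* Smooth via the 2D mapping with a 5x5 median filter and compute normals.
smooth_object_model_3d (ObjectModel3DValid, 'xyz_mapping_compute_normals', ['xyz_mapping_mask_width','xyz_mapping_mask_height'], [5,5], SmoothObjectModel3D)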
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model containing 3D point data.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Smoothing method.
Default: ’mls’
List of values: Method ∈ {’mls’, ’xyz_mapping’, ’xyz_mapping_compute_normals’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of generic smoothing parameters.
Default: []
List of values: GenParamName ∈ {’mls_kNN’, ’mls_order’, ’mls_abs_sigma’, ’mls_relative_sigma’,
’mls_force_inwards’, ’xyz_mapping_filter’, ’xyz_mapping_mask_width’, ’xyz_mapping_mask_height’}
Possible Predecessors
sample_object_model_3d
Possible Successors
create_surface_model, fuse_object_model_3d
Alternatives
smooth_object_model_3d
Module
3D Metrology
’xyz_mapping_max_area_holes’ specifies up to which area holes in the point coordinates are closed during a simple
Delaunay triangulation. Only holes which are completely surrounded by the image region are closed. If
’xyz_mapping_max_area_holes’ is set to 0, no holes are triangulated. The parameter corresponds to the
GenParamName ’max_area_holes’ of prepare_object_model_3d.
[Figure: (1) In order to triangulate the 3D object model, a 2D mapping of the model is used. The triangulation is based on the respective 2D neighborhood. Thereby it is possible that unwanted triangles are created along the sensor's direction of view, e.g., because of hidden object parts or clutter data. (2) Whether or not a triangle is returned is decided by computing the difference between the normal direction of each triangle and the viewing direction. The maximum deviation is specified in ’xyz_mapping_max_view_angle’.]
Greedy triangulation
By selecting Method=’greedy’, a so called greedy triangulation algorithm is invoked. It requires 3D point data
containing normals. If ObjectModel3D does not contain the normals, they are calculated internally, in an
identical manner to calling surface_normals_object_model_3d with its default parameters before trian-
gulation. The algorithm constructs a surface, which passes through the points and whose surface normals must
conform to the corresponding point normals up to a given tolerance. The surface is represented by triangular
faces, which are constructed from triplets of neighboring points. In order to determine which triplets qualify for a
surface triangle, the algorithm applies for each point pair the following local neighborhood test, denoted as surface
neighborhood criteria (SNC):
If a point P is lying on a surface, with N being the orientation (normal) of the surface, then a point P′ with normal
N′ is considered to lie on this surface if:
• If the resulting surface triangulation looks very disconnected or exhibits many holes, this might be a hint that
r is too small and thus restricts the generation of triangles that are large enough to close the holes. Try to
increase r.
• If the normals data is noisy (i.e., neighboring normals deviate to a large extent from each other), then
increase α. Noisy normals are typically caused either by the sensor, which delivers both the point and the
normals data, or by an imprecise normals estimation routine, which computes the normals from the point
data.
• If the point data represents a very curved surface, i.e., it exhibits a very fine structure like, e.g., little buckles,
fine waves or folds, or sharp turns, then facilitate the generation of curved surface parts by increasing
α and/or β.
• In contrast, if the data is rather planar but has lots of outliers (i.e., points lying next to the surface, which
have completely different orientations and thus most probably do not belong to it), then decrease α to exclude
them from the surface generation.
• If the point data is very noisy and resembles more a crust than a single-layer surface, then increase β and/or
d to make sure that neighbors for P can still be found even if they are further away from the optimal plane
[P, N ].
• In contrast, if the data is rather noise-free, but two surfaces are running close to each other and are nearly
parallel, e.g., surfaces representing the front and the back side of a thin, plate-like object, then decrease β and
d to avoid interference between the surfaces.
The greedy triangulation algorithm starts by initializing a surface with one triangle constructed from three SNC-
eligible, neighboring points. If all valid neighborhoods show local inconsistencies like collinear or ’double’ points,
an error will be raised. A prior call of sample_object_model_3d with Method set to ’fast’ and a small
SamplingParam will remove most local inconsistencies from ObjectModel3D. Having found one triangle, the
algorithm then greedily constructs new triangles as long as further points can be reached by the SNC rules from
any point on the surface boundaries. If no points can be reached from the current surface, but there are unprocessed
points in the 3D object model, a new surface is initialized. Because the SNC rules are essentially defined only in
the small local neighborhoods of the points, the resulting surface can have global topological artifacts like holes
and flips. The latter occur when a surface, while it is growing, meets itself but with inverted face orientations (i.e.,
the surface was flipped somewhere while it was growing). These artifacts are handled in special post-processing
steps: hole filling and flip resolving, respectively.
Finally, a mesh morphology can be performed to additionally remove artifacts that occurred on the final surface
boundaries. The mesh morphology consists of several mesh erosion cycles and several subsequent mesh dilation
cycles. With each erosion cycle, all triangles reachable from the surface boundaries are removed and the surface
boundaries shrink. Then, with each dilation cycle all triangles reachable from the surface boundaries are appended
again to the surface and the boundaries expand. Note that this is only possible for triangles that were removed by
a preceding erosion cycle. Therefore, once the original boundaries of the surface (i.e., those which existed before
the mesh erosion cycles) are reached, the dilation cannot advance any further, and hence the number of dilation
cycles cannot exceed the number of erosion cycles. Applying mesh erosion followed by mesh dilation is analogous
to performing an opening on standard HALCON regions. Finally, the mesh morphology can delete surface pieces which have too
few triangles.
The individual algorithm steps are summarized here:
By setting GenParamName to one of the following values, additional parameters specific for the greedy triangu-
lation can be set with GenParamValue:
’greedy_kNN’ specifies the size k of the neighborhood. While looking for reachable SNC neighbors for a surface
boundary point, the algorithm considers only its closest k neighbors.
Suggested values: 20, 30, 40, 50, 60.
Default: 40.
’greedy_radius_type’: if set to ’fixed’, ’greedy_radius_value’ specifies the SNC radius r in meter units.
If set to ’z_factor’, r is calculated for each point P by multiplying its z-coordinate by the value specified
by ’greedy_radius_value’. This representation of r is appropriate for data where the density of the points
correlates with their distance from the sensor they were recorded with. This is typically the case with depth
sensors or TOF cameras.
If set to ’auto’, the algorithm determines internally whether to use a ’fixed’ or a ’z_factor’ radius and estimates
its value. The estimated value is then multiplied by the value specified in ’greedy_radius_value’. This way,
the user specifies a scale factor for the estimated radius.
List of values: ’auto’, ’fixed’, ’z_factor’.
Default: ’auto’.
’greedy_radius_value’: see ’greedy_radius_type’.
Suggested values: 0.01, 0.05, 0.5, 0.66, 1.0, 1.5, 2.0, 3.0, 4.0
’greedy_neigh_orient_tol’: sets the SNC parameter α in degree units. α controls the surface curvature as described
with the SNC rules above.
Suggested values: 10, 20, 30, 40.
Default: 30.
’greedy_neigh_orient_consistent’: enforces that the normals of two neighboring points have the same orientation
(i.e., they do not show in opposite directions). If enabled, this parameter disables the second part of the SNC
criteria for α, i.e., if ∠(N, N′) > α, the criterion fails even if ∠(N, −N′) ≤ α.
List of values: ’true’, ’false’.
Default: ’false’.
’greedy_neigh_latitude_tol’: sets the SNC parameter β in degree units. β controls the surface neighborhood
latitude window as described with the SNC rules above.
Suggested values: 10, 20, 30, 40.
Default: 30.
’greedy_neigh_vertical_tol’: sets the SNC parameter d as a factor of the radius r.
Suggested values: 0.01, 0.1, 0.2, 0.3.
Default: 0.1.
’greedy_hole_filling’: sets the length of surface boundaries (in number of point vertices) that should be considered
for the hole filling. If ’false’ is specified, then the hole filling step is disabled.
Suggested values: ’false’, 20, 40, 60.
Default: 40.
’greedy_fix_flips’: enables/disables the flip resolving step of the algorithm.
List of values: ’true’, ’false’.
Default: ’true’.
’greedy_prefetch_neighbors’: enables/disables prefetching of lists of the k nearest neighbors for all points. This
prefetching improves the algorithm speed, but has high memory requirements (O(kn), where k is the number
specified by ’greedy_kNN’, and n is the number of points in ObjectModel3D). For very large data, it might
be impossible to preallocate such a big amount of memory, which results in a memory error. In such a case,
the prefetching must be disabled.
List of values: ’true’, ’false’.
Default: ’true’.
’greedy_mesh_erosion’: specifies the number of erosion cycles applied to the final mesh.
Suggested values: 0, 1, 2, 3.
Default: 0.
’greedy_mesh_dilation’: specifies the number of dilation cycles. The mesh dilation is applied after the mesh
erosion. If ’greedy_mesh_dilation’ is set to a greater value than ’greedy_mesh_erosion’, it will be reduced
internally to the value of ’greedy_mesh_erosion’.
Suggested values: 0, 1, 2, 3.
Default: 0.
’greedy_remove_small_surfaces’: controls the criteria for removing small surface pieces. If set to ’false’, the
small surface removal is disabled. If set to a value between 0.0 and 1.0, all surfaces having less triangles
than ’greedy_remove_small_surfaces’×num_triangles will be removed, where num_triangles is
the total number of triangles generated by the algorithm. If set to a value greater than 1, all surfaces having
less triangles than ’greedy_remove_small_surfaces’ will be removed.
Suggested values: ’false’, 0.01, 0.05, 0.1, 10, 100, 1000, 10000.
Default: ’false’.
’greedy_timeout’: using a timeout, it is possible to interrupt the operator after a defined period of time in seconds.
This is especially useful in cases where a maximum cycle time has to be ensured. The temporal accuracy of
this interrupt is about 10 ms. Passing values less than zero is not valid. Setting ’greedy_timeout’ to ’false’
deactivates the timeout, which corresponds to the default. The temporal accuracy depends on several factors
including the size of the model, the speed of your computer, and the ’timer_mode’ set via set_system.
Suggested values: ’false’, 0.1, 0.5, 1, 10, 100.
Default: ’false’.
’greedy_suppress_timeout_error’: by default, if a timeout occurs the operator returns a timeout error code. By
setting ’greedy_suppress_timeout_error’ to ’true’ instead, the operator returns no error and the intermediate
results of the triangulation are returned in TriangulatedObjectModel3D. With the error suppressed,
the occurrence of a timeout can be checked by querying the list of values returned in Information (in
’verbose’ mode) by looking for the value corresponding to ’timeout_occured’.
List of values: ’false’, ’true’.
Default: ’false’.
’greedy_output_all_points’: controls whether all input points are returned, regardless of whether they were used
in the output triangulation or not. Mainly provided for reasons of backward compatibility. When
’greedy_output_all_points’ is set to ’false’, the old point indices are stored as an extended attribute named
’original_point_indices’ in the 3D object model TriangulatedObjectModel3D. This attribute can sub-
sequently be queried with get_object_model_3d_params or be processed with other operators that
use extended attributes.
List of values: ’false’, ’true’.
Default: ’false’.
’information’: specifies which intermediate results are reported in Information. By default (’informa-
tion’=’num_triangles’), the number of generated triangles is reported. For ’information’=’verbose’, a list of
name-value information pairs is returned. Currently, the following information is reported:
Implicit triangulation
By selecting Method=’implicit’ an implicit triangulation algorithm based on a Poisson solver (see the paper in
References) is invoked. It constructs a water-tight surface, i.e., it is completely closed. The implicit triangulation
requires 3D point data containing normals. Additionally, it is required that the 3D normals are pointing strictly
inwards or strictly outwards regarding the volume enclosed by the surface to be reconstructed. Unlike the ’greedy’
algorithm, the ’implicit’ algorithm does not construct the surface through the input 3D points. Instead, it constructs
a surface that approximates the original 3D data and creates a new set of 3D points lying on this surface.
First, the algorithm organizes the point data in an adaptive octree structure: the volume of the bounding box
containing the point data is split in the middle in each dimension resulting in eight sub-volumes, or octree voxels.
Voxels still containing enough point data can be split in further eight sub-voxels. Voxels that contain no or just
few points must not be split further. This splitting is repeated recursively in regions of dense 3D point data until
the resulting voxels contain no or just few points. The recursion level of the voxel splits, reached with the smallest
voxels, is denoted as depth of the octree.
In the next step, the algorithm estimates the values of the so-called implicit indicator function of the surface, based
on the assumption that the points from ObjectModel3D are lying on the surface of an object and the normals of
the points in ObjectModel3D are pointing inwards that object (see the paper in References). This assumption
explains the requirement of mutually consistent normal orientations. The implicit function has a value of 1 in
voxel corners that are strictly inside the body and 0 for voxel corners strictly outside of it. Due to noisy data, voxel
corners that are close to the boundary of the object cannot be ’labeled’ unambiguously. Therefore, they receive a
value between 0 and 1.
The implicit surface defined by the indicator function is a surface, such that each point lying on it has an indicator
value of 0.5. The implicit algorithm uses a standard marching cubes algorithm to compute the intersection points
of the implicit surface with the sides of the octree voxels. The intersection points result in the new set of 3D
points spanning the surface returned in TriangulatedObjectModel3D. As a consequence, the resolution of
the surface details reconstructed in TriangulatedObjectModel3D depends directly on the resolution of the
octree (i.e., on its depth).
By setting GenParamName to one of the following values, additional parameters specific for the implicit trian-
gulation can be set with GenParamValue:
’implicit_octree_depth’: sets the depth of the octree. The octree depth controls the resolution of the surface gen-
eration: a higher depth leads to a higher surface resolution. The octree depth has an exponential effect on the
runtime and on the memory requirements of the octree. Therefore, the depth is limited
to 12.
Restriction: 5 ≤ ’implicit_octree_depth’ ≤ 12.
Suggested values: 5, 6, 8, 10, 11, 12.
Default: 6.
’implicit_solver_depth’: enables an alternative algorithm, which can prepare the implicit function up to a user
specified octree depth, before the original algorithm takes over the rest of the computations. This algorithm
requires less memory than the original one, but is a bit slower.
Restriction: ’implicit_solver_depth’ ≤ ’implicit_octree_depth’.
Suggested values: 2, 4, 6, 8, 10, 11, 12.
Default: 6.
’implicit_min_num_samples’: sets the minimal number of point samples required per octree voxel node. If the
number of points in a voxel is less than this value, the voxel is not split any further. For noise free data, this
value can be set low (e.g., between 1-5). For noisy data, this value should be set higher (e.g., 10-20), such
that the noisy data is accumulated in single voxel nodes to smooth the noise.
Suggested values: 1, 5, 10, 15, 20, 30.
Default: 1.
’information’: specifies which intermediate results are reported in Information. By default (’informa-
tion’=’num_triangles’), the number of generated triangles is reported. For ’information’=’verbose’, a list of
name-value information pairs is returned. Currently, the following information is reported:
Name Value Description
’num_triangles’ <number of triangles> returns the number of generated triangular faces.
’num_points’ <number of points> returns the number of generated points.
List of values: ’num_triangles’, ’verbose’.
Default: ’num_triangles’.
where:
N : number of points
k: size of the neighborhood
D: depth of the octree
Depending on the number of points in ObjectModel3D, noise, and specific structure of the data, both algorithms
deliver different results and perform with different time and memory complexity. The greedy algorithm works fast,
requires less memory, and returns a high level of details in the reconstructed surface for rather small data sets
(up to, e.g., 500.000 points). Since the algorithm must basically process every single point in the data, its time
performance cannot be decoupled from the point number and it can be rather time consuming for more than 500.000
points. If large point sets need to be triangulated with this method anyway, it is recommended to first sub-sample
them via sample_object_model_3d.
In contrast, as described above, the implicit algorithm organizes all points in an underlying octree. Therefore, the
details returned by it, its speed, and its memory consumption are dominated by the depth of the octree. While
higher levels of surface details can only be achieved at disproportionately higher time and memory costs, the
octree offers the advantage that it handles large point sets more efficiently. With the octree, the performance of the
implicit algorithm depends mostly on the depth of the octree and to a lesser degree on the number of points to be
processed. One further disadvantage of the implicit algorithm is its requirement that the adjacent point normals are
strictly consistent. This requirement can seldom be fulfilled by usual normal estimation routines.
Parameters
. ObjectModel3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .object_model_3d(-array) ; handle
Handle of the 3D object model containing 3D point data.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Triangulation method.
Default: ’greedy’
List of values: Method ∈ {’greedy’, ’implicit’, ’polygon_triangulation’, ’xyz_mapping’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic triangulation parameters.
Default: []
List of values: GenParamName ∈ {’information’, ’implicit_octree_depth’, ’implicit_solver_depth’,
’implicit_min_num_samples’, ’greedy_radius_type’, ’greedy_radius_value’, ’greedy_kNN’,
’greedy_neigh_orient_tol’, ’greedy_neigh_orient_consistent’, ’greedy_neigh_vertical_tol’,
’greedy_neigh_latitude_tol’, ’greedy_hole_filling’, ’greedy_fix_flips’, ’greedy_mesh_erosion’,
’greedy_mesh_dilation’, ’greedy_remove_small_surfaces’, ’greedy_prefetch_neighbors’, ’greedy_timeout’,
’greedy_suppress_timeout_error’, ’greedy_output_all_points’, ’xyz_mapping_max_area_holes’,
’xyz_mapping_output_all_points’, ’xyz_mapping_max_view_angle’, ’xyz_mapping_max_view_dir_x’,
’xyz_mapping_max_view_dir_y’, ’xyz_mapping_max_view_dir_z’}
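A minimal sketch of a greedy triangulation (the file name, the sampling distance, and the chosen generic parameter values are assumptions):
* Read a point cloud and sub-sample it with normals (hypothetical file name).
read_object_model_3d ('scan.om3', 'm', [], [], ObjectModel3D, Status)
sample_object_model_3d (ObjectModel3D, 'fast_compute_normals', 0.001, [], [], SampledOM3D)
* Greedy triangulation with hole filling for boundaries up to 40 vertices.
triangulate_object_model_3d (SampledOM3D, 'greedy', ['greedy_hole_filling'], [40], TriangulatedObjectModel3D, Information)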
Possible Predecessors
read_object_model_3d, gen_plane_object_model_3d, gen_sphere_object_model_3d,
gen_cylinder_object_model_3d, gen_box_object_model_3d,
gen_sphere_object_model_3d_center, sample_object_model_3d
Possible Successors
write_object_model_3d, render_object_model_3d, project_object_model_3d,
simplify_object_model_3d
References
M. Kazhdan, M. Bolitho, and H. Hoppe: “Poisson Surface Reconstruction.” Symposium on Geometry Processing
(June 2006).
Module
3D Metrology
xyz_to_object_model_3d ( X, Y, Z : : : ObjectModel3D )
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
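A minimal sketch (the image file names are assumptions; any source of registered X, Y, and Z coordinate images can be used, e.g., the output of disparity_image_to_xyz):
* Read three single-channel coordinate images (hypothetical file names).
read_image (X, 'xyz/coords_x')
read_image (Y, 'xyz/coords_y')
read_image (Z, 'xyz/coords_z')
* Combine them into a 3D object model with an XYZ-mapping.
xyz_to_object_model_3d (X, Y, Z, ObjectModel3D)
* Query the number of points that were created.
get_object_model_3d_params (ObjectModel3D, 'num_points', NumPoints)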
Possible Predecessors
disparity_image_to_xyz, get_sheet_of_light_result
Alternatives
gen_object_model_3d_from_points, get_sheet_of_light_result_object_model_3d
See also
read_object_model_3d
Module
3D Metrology
3D Reconstruction
with
r1, c1, r2, c2: row and column coordinates of the corresponding pixels of the two input images,
g1, g2: gray values of the unprocessed input images,
N = (2m + 1)(2n + 1): size of the correlation window,
ḡ(r, c) = (1/N) · Σ_{r′=r−m}^{r+m} Σ_{c′=c−n}^{c+n} g(r′, c′): mean value within the correlation window of width 2m + 1 and height 2n + 1.
Note that the methods ’sad’ and ’ssd’ compare the gray values of the pixels within a mask window directly, whereas
’ncc’ compensates for the mean gray value and its variance within the mask window. Therefore, if the two images
differ in brightness and contrast, this method should be preferred. For images with similar brightness and contrast
’sad’ and ’ssd’ are to be preferred as they are faster because of less complex internal computations.
Note that the quality of the correlation decreases with rising S for the methods ’sad’ and ’ssd’ (the best quality
value is 0), but increases for the method ’ncc’ (the best quality value is 1.0).
The size of the correlation window, referenced by 2m + 1 and 2n + 1, has to be odd numbered and is passed in
MaskWidth and MaskHeight. The search space is confined by the minimum and maximum disparity value
MinDisparity and MaxDisparity. Because pixel values are not defined beyond the image border, the resulting
domain of Disparity and Score is not set along the image border within a margin of height (MaskHeight-
1)/2 at the top and bottom border and of width (MaskWidth-1)/2 at the left and right border. For the same reason,
the maximum disparity range is reduced at the left and right image border.
Since matching turns out to be highly unreliable when dealing with poorly textured areas, the minimum statistical
spread of gray values within the correlation window can be defined in TextureThresh. This threshold is applied
on both input images ImageRect1 and ImageRect2. In addition, ScoreThresh guarantees the matching
quality and defines the maximum (’sad’,’ssd’) or, respectively, minimum (’ncc’) score value of the correlation
function. Setting Filter to ’left_right_check’, moreover, increases the robustness of the returned matches, as the
result relies on a concurrent direct and reverse match, whereas ’none’ switches it off.
The number of pyramid levels used to improve the time response of binocular_disparity is determined by
NumLevels. Following a coarse-to-fine scheme, disparity images of higher levels are computed and segmented
into rectangular subimages of similar disparity to reduce the disparity range on the next lower pyramid level.
TextureThresh and ScoreThresh are applied on every level and the returned domain of the Disparity
and Score images arises from the intersection of the resulting domains of every single level. Generally, pyramid
structures are the more advantageous the more the disparity image can be segmented into regions of homogeneous
disparities and the bigger the specified disparity range is. As a drawback, coarse pyramid levels might lose
important texture information, which can result in deficient disparity values.
Finally, the value ’interpolation’ for parameter SubDisparity performs subpixel refinement of disparities. It is
switched off by setting the parameter to ’none’.
Parameters
. ImageRect1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte
Rectified image of camera 1.
. ImageRect2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte
Rectified image of camera 2.
. Disparity (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Disparity map.
. Score (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Evaluation of the disparity values.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Matching function.
Default: ’ncc’
List of values: Method ∈ {’sad’, ’ssd’, ’ncc’}
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Width of the correlation window.
Default: 11
Suggested values: MaskWidth ∈ {5, 7, 9, 11, 21}
Restriction: 3 <= MaskWidth && odd(MaskWidth)
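Example
A minimal sketch of a typical call (the image names, the disparity range of [-30, 30], and the remaining parameter values and their order after MaskWidth are assumptions):
* Rectified stereo pair (hypothetical image names).
read_image (ImageRect1, 'stereo/rect_left')
read_image (ImageRect2, 'stereo/rect_right')
* NCC matching, 11x11 mask, disparity range [-30, 30], 4 pyramid levels,
* left-right check and subpixel interpolation (values are assumptions).
binocular_disparity (ImageRect1, ImageRect2, Disparity, Score, 'ncc', 11, 11, 10, -30, 30, 4, 0.5, 'left_right_check', 'interpolation')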
Result
binocular_disparity returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an excep-
tion is raised.
Execution Information
Compute the disparities of a rectified stereo image pair using multigrid methods.
binocular_disparity_mg calculates the disparity between two rectified stereo images ImageRect1 and
ImageRect2 and returns it in Disparity. In contrast to binocular_disparity, a variational approach
based on multigrid methods is used. This approach returns disparity values also for image parts that contain no
texture. In contrast to binocular_distance_mg, the results are not transformed into distance values.
The input images must be a pair of rectified stereo images, i.e., corresponding points must have the same vertical
coordinate. The images can have different widths, but must have the same height. The runtime of the operator is
approximately linear in the size of the images.
The disparity is the amount by which each point in the first image ImageRect1 needs to be moved to reach its
corresponding point in the second image ImageRect2. Two points are called corresponding if they are the image
of the same point in the original scene. The calculated disparity field is dense and estimates the disparity also for
points that do not have a corresponding point. The disparity is calculated only for those lines that are part of the
domains of both input images. More exactly, the domain of the disparity map is calculated as the intersection of
heights of the smallest enclosing rectangles of the domains of the input images.
The calculated disparity field is usually not perfect. If the parameter CalculateScore is set to ’true’, a quality
measure for the disparity is estimated for each pixel and returned in Score, which is a gray value image with a
range from 0 to 10, where 0 is the best quality and 10 the worst. For this, the reverse disparity field from the second
to the first image is calculated and compared to the returned disparity field. Because of this, the runtime roughly
doubles when computing the score.
The operator uses a variational approach, where an energy value is assigned to each possible disparity field. Dis-
parity fields with a lower energy are better than those with a high energy. The operator calculates a disparity field
with the minimum energy and returns it.
The energy assigned to a disparity field consists of a data term and a smoothness term. The data term models
the fact that corresponding points are images of the same part of the scene and thus have equal gray values. The
smoothness term models the fact that the imaged scene and with it its disparity field is piecewise smooth, which
leads to an interpolation of the disparity into areas with low information from the data term, e.g., areas with no
texture.
The details of the assumptions are as follows:
Constancy of the gray values: It is assumed that corresponding points have the same gray value, i.e., that I1 (x, y) =
I2 (x + u(x, y), y).
Constancy of the gray value gradients: It is assumed that corresponding points have the same gray value gradient,
i.e., that ∇I1 (x, y) = ∇I2 (x + u(x, y), y). Discrepancies from this assumption are modeled using the L2 norm
of the difference of the two gradients. The gray value gradient has the advantage of being invariant to additive
illumination changes between the two images.
Statistical robustness in the data term: To reduce the influence of outliers, i.e., points that violate the constancy
assumptions, they are penalized in a statistically robust manner via the total variation Ψ(x) = √(x + ε²), where
ε = 0.01 is a fixed regularization constant.
Smoothness of the disparity field: It is assumed that the resulting disparity field is piecewise smooth. This is
modeled by the L2 norm of the derivative of the disparity field.
Statistical robustness in the smoothness term: Analogously to the data term, the statistically robust total variation
is applied to the smoothness term to reduce the influence of outliers. This is especially important for preserving
edges in the disparity field that appear on object boundaries.
The energy functional is the integral of a linear combination of the above terms over the area of the first image. The
coefficients of the linear combination are parameters of the operator and allow a fine tuning of the model to a spe-
cific situation. GrayConstancy determines the influence of the gray value constancy, GradientConstancy
the influence of the constancy of the gray value gradient, and Smoothness the influence of the smoothness term.
The first two parameters need to be adapted to the gray value interval of the images. The proposed parameters are
valid for images with a gray value range of 0 to 255.
Let I1 (x, y) be the gray value of the first image at the coordinates (x, y), I2 (x, y) the gray value of the second
image, and u(x, y) the value of the disparity at the coordinate (x, y). The energy functional is then given by
E = ∫ Ψ( GrayConstancy · (I2(x + u(x, y), y) − I1(x, y))² + GradientConstancy · |∇I2(x + u(x, y), y) − ∇I1(x, y)|² ) + Smoothness · Ψ( |∇u(x, y)|² ) dx dy
where the first term inside the outer Ψ models the gray value constancy, the second term the gradient constancy, and the last summand the smoothness.
It is assumed that the disparity field u that minimizes the functional E satisfies the above assumptions and is thus
a good approximation of the disparity between the two images.
The above functional is minimized by finding the roots of the Euler-Lagrange equation (ELE) of the integral. This
is comparable to finding the extremal values of a one-dimensional function by searching the roots of its derivative.
The ELE is a nonlinear partial differential equation over the region of the integral, which needs to be 0 for extrema
of E. Since the functional typically does not have any maxima, the corresponding roots of the ELE correspond to
the minima of the functional.
The following techniques are used to find the roots of the ELE:
Fixed point iteration: The ELE is solved by converting it to a fixed point iteration that iteratively approaches the
solution. The number of iterations can be used to balance between speed and accuracy of the solution. Each step
of the fixed point iteration consists of solving a linear partial differential equation.
Coarse-to-fine process: A Gaussian image pyramid of the stereo images is created. The ELE is first solved on a
coarse level of the pyramid and the solution is taken as the initial value of the fixed point iteration of the next level.
This has a number of advantages and disadvantages:
1. Since the fixed point iteration of the next level receives a good initial value, fewer iterations are necessary to
achieve a good accuracy. The iteration must perform only small corrections of the disparity.
2. Large disparities on the original images become small disparities on the coarse grid levels and can thus be
calculated more easily.
3. The robustness against noise in the images is increased because most kinds of noise disappear on the coarse
version of the images.
4. Problems arise with small structures that have a large disparity difference to their surroundings since they
disappear on coarse versions of the image and thus the disparity of the surroundings is calculated. This error will
not be corrected on the finer levels of the image pyramid since only small corrections are calculated there.
Multigrid methods: The linear partial differential equations that arise in the fixed point iteration at each pyramid
level are converted into a linear system of equations through linearization. These linear systems are solved
using iterative solvers. Multigrid methods are among the most efficient solvers for the kind of linear systems
that arise here. They use the fact that classic iterative solvers, like the Gauss-Seidel solver, quickly reduce
the high frequency parts of the error, but only slowly reduce the low frequency parts. Multigrid methods thus
calculate the error on a coarser grid where the low frequency part of the error appears as high frequencies
and can be reduced quickly by the classical solvers. This is done hierarchically, i.e., the computation of
the error on a coarser resolution level itself uses the same strategy and efficiently computes its error (i.e.,
the error of the error) by correction steps on an even coarser resolution level. Depending on whether one
or two error correction steps are performed per cycle, a so called V or W cycle is obtained. The corre-
sponding strategies for stepping through the resolution hierarchy are as follows for two to four resolution levels:
[Figure: traversal of the resolution hierarchy for V-cycles and W-cycles with two to four resolution levels (level 1 = finest, level 4 = coarsest). In the original figure, iterations on the original problem are denoted by large markers, while small markers denote iterations on error correction problems.]
Algorithmically, a correction cycle can be described as follows:
1. In the first step, several (few) iterations using an iterative linear or nonlinear basic solver are performed (e.g.,
a variant of the Gauss-Seidel solver). This step is called pre-relaxation step.
2. In the second step, the current error is computed to correct the current solution (the solution after step 1).
For efficiency reasons, the error is calculated on a coarser resolution level. This step, which can be performed
iteratively several times, is called coarse grid correction step.
3. In a final step, again several (few) iterations using the iterative linear or nonlinear basic solver of step 1 are
performed. This step is called post-relaxation step.
In addition, the solution can be initialized in a hierarchical manner. Starting from a very coarse variant of the
original linear equation system, the solution is successively refined. To do so, interpolated solutions of coarser
variants of the equation system are used as the initialization of the next finer variant. On each resolution level
itself, the V or W cycles described above are used to efficiently solve the linear equation system on that resolution
level. The corresponding multigrid methods are called full multigrid methods in the literature. The full multigrid
algorithm can be visualized as follows:
(Figure: full multigrid algorithm with hierarchical initialization, stepping from coarse to fine resolution levels.)
This example represents a full multigrid algorithm that uses two W correction cycles per resolution level of the
hierarchical initialization. The interpolation steps of the solution from one resolution level to the next are denoted
by i and the two W correction cycles by w1 and w2 . Iterations on the original problem are denoted by large
markers, while small markers denote iterations on error correction problems.
Depending on the selected multigrid solver, a number of parameters for fine tuning the solver are available and are
described in the following.
The parameter InitialGuess gives an initial value for the initialization of the fixed point iteration on the coarsest
grid. Usually 0 is sufficient, but other values can be used to avoid local minima.
Using the parameters MGParamName and MGParamValue, the solver is controlled, i.e., the coarse-to-fine pro-
cess, the fixed point iteration, and the multigrid solver. It is usually sufficient to use one of the predefined pa-
rameter sets, which are available by setting MGParamName = ’default_parameters’ and MGParamValue =
’very_accurate’, ’accurate’, ’fast_accurate’, or ’fast’.
If the parameters should be specified individually, MGParamName and MGParamValue must be set to tuples of
the same length. The values corresponding to the parameters specified in MGParamName must be specified at
the corresponding position in MGParamValue. The parameters are evaluated in the given order. Therefore, it is
possible to first select a group of default parameters (see above) and then change only some of the parameters. In
the following, the possible parameters are described.
MGParamName = ’mg_solver’ sets the solver for the linear system. Possible values for MGParamValue are
’multigrid’ for a simple multigrid solver, ’full_multigrid’ for a full multigrid solver, and ’gauss_seidel’ for the plain
Gauss-Seidel solver. The multigrid methods have the advantage of a faster convergence, but incur the overhead of
coarsening the linear system.
MGParamName = ’mg_cycle_type’ selects the type of recursion for the multigrid solvers. Possible values for
MGParamValue are ’v’ for a V-Cycle, ’w’ for a W-Cycle, and ’none’ for no recursion.
MGParamName = ’mg_pre_relax’ sets the number of iterations of the pre-relaxation step in multigrid solvers, or
the number of iterations for the Gauss-Seidel solver, depending on which is selected.
MGParamName = ’mg_post_relax’ sets the number of iterations of the post-relaxation step.
Increasing the number of pre- and post-relaxation steps increases the computation time asymptotically linearly.
However, no additional restriction and prolongation operations (zooming down and up of the error correction
images) are performed. Consequently, a moderate increase in the number of relaxation steps only leads to a slight
increase in the computation times.
MGParamName = ’initial_level’ sets the coarsest level of the image pyramid where the coarse-to-fine process
starts. The value can be positive, in which case it directly gives the initial level. Level 0 is the finest level with the
original images. If the value is negative, then it is used relative to the maximum number of pyramid levels. The
coarsest available pyramid level is the one where both images have a size of at least 4 pixels in both directions. As
described below, the default value of ’initial_level’ is -2. This facilitates the calculation of the correct disparity for
images that have very large disparities. In some cases, e.g., for repeating textures, this may cause too large
disparities to be calculated for some parts of the image. In this case, ’initial_level’ should be set to a smaller
value.
The standard parameters zoom the image with a factor of 0.6 per pyramid level. If a guess of the maximum
disparity d exists, then the initial level s should be selected so that 0.6^(-s) is greater than d, i.e., so that the disparity scaled down to level s is smaller than one pixel.
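For example, with the default ’pyramid_factor’ of 0.6 and an expected maximum disparity of about 20 pixels, the smallest level s with 0.6^(-s) > 20 is s = 6 (0.6^(-6) ≈ 21.4). A minimal HDevelop sketch of this rule of thumb (the variable names are only illustrative):
* Estimate a suitable 'initial_level' from a guess of the maximum disparity.
MaxDisparityGuess := 20.0
PyramidFactor := 0.6
InitialLevel := int(ceil(log(MaxDisparityGuess) / log(1.0 / PyramidFactor)))
* For a guess of 20 pixels, InitialLevel evaluates to 6.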
MGParamName = ’iterations’ sets the number of iterations of the fixed point iteration per pyramid level. The
exact number of iterations is steps = min(10, iterations + level^2), where level is the current level in the image
pyramid. If this value is set to 0, then no iteration is performed on the finest pyramid level 0. Instead, the result of
level 1 is scaled to the original image size and returned, which can be used if speed is crucial. The runtime of the
operator is approximately linear in the number of iterations.
MGParamName = ’pyramid_factor’ determines the factor by which the images are scaled when creating the image
pyramid for the coarse-to-fine process. The width and height of the next smaller image is scaled by the given factor.
The value must lie between 0.1 and 0.9.
The predefined parameter sets for MGParamName = ’default_parameters’ contain the following values:
’default_parameters’ = ’very_accurate’: ’mg_solver’ = ’full_multigrid’, ’mg_cycle_type’ = ’w’, ’mg_pre_relax’
= 5, ’mg_post_relax’ = 5, ’initial_level’ = -2, ’iterations’ = 5, ’pyramid_factor’ = 0.6.
’default_parameters’ = ’accurate’: ’mg_solver’ = ’full_multigrid’, ’mg_cycle_type’ = ’w’, ’mg_pre_relax’ = 5,
’mg_post_relax’ = 5, ’initial_level’ = -2, ’iterations’ = 2, ’pyramid_factor’ = 0.6.
’default_parameters’ = ’fast_accurate’: ’mg_solver’ = ’full_multigrid’, ’mg_cycle_type’ = ’v’, ’mg_pre_relax’
= 2, ’mg_post_relax’ = 2, ’initial_level’ = -2, ’iterations’ = 1, ’pyramid_factor’ = 0.6. These are the default
parameters of the algorithm if the default parameter set is not specified.
’default_parameters’ = ’fast’: ’mg_solver’ = ’full_multigrid’, ’mg_cycle_type’ = ’v’, ’mg_pre_relax’ = 1,
’mg_post_relax’ = 1, ’initial_level’ = -2, ’iterations’ = 0, ’pyramid_factor’ = 0.6.
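For example, to start from the predefined ’accurate’ parameter set but switch to V-cycles and a different initial level, the parameter tuples can be combined as follows (a minimal HDevelop sketch; the rectified input images and the values of the remaining control parameters are only illustrative):
* Select the predefined 'accurate' set, then override individual values.
MGParamName := ['default_parameters','mg_cycle_type','initial_level']
MGParamValue := ['accurate','v',-3]
binocular_disparity_mg (ImageRect1, ImageRect2, Disparity, Score, 1.0, 30.0, 5.0, 0.0, 'false', MGParamName, MGParamValue)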
Weaknesses of the operator: Large jumps in the disparity, which correspond to large jumps in the distance of the
observed objects, are smoothed rather strongly. This leads to problems with thin objects that have a large distance
to their background.
Distortions can occur at the left and right border of the image in the parts that are visible in only one of the images.
Additionally, general problems of stereo vision should be avoided, including horizontally repetitive patterns, areas
with little texture as well as reflections.
Parameters
. ImageRect1 (input_object) . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte / uint2 / real
Rectified image of camera 1.
. ImageRect2 (input_object) . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte / uint2 / real
Rectified image of camera 2.
. Disparity (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : real
Disparity map.
. Score (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : real
Score of the calculated disparity if CalculateScore is set to ’true’.
. GrayConstancy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Weight of the gray value constancy in the data term.
Default: 1.0
Suggested values: GrayConstancy ∈ {0.0, 1.0, 2.0, 10.0}
Restriction: GrayConstancy >= 0.0
. GradientConstancy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Weight of the gradient constancy in the data term.
Default: 30.0
Suggested values: GradientConstancy ∈ {0.0, 1.0, 5.0, 10.0, 30.0, 50.0, 70.0}
Restriction: GradientConstancy >= 0.0
. Smoothness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Weight of the smoothness term in relation to the data term.
Default: 5.0
Suggested values: Smoothness ∈ {1.0, 3.0, 5.0, 10.0}
Restriction: Smoothness > 0.0
. InitialGuess (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Initial guess of the disparity.
Default: 0.0
Suggested values: InitialGuess ∈ {-30.0, -20.0, -10.0, 0.0, 10.0, 20.0, 30.0}
. CalculateScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Should the quality measure be returned in Score?
Default: ’false’
Suggested values: CalculateScore ∈ {’true’, ’false’}
Result
If the parameter values are correct, binocular_disparity_mg returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Execution Information
binocular_disparity_ms
Compute the disparities of a rectified stereo image pair using multi-scanline optimization.
binocular_disparity_ms calculates the disparity between two rectified stereo images ImageRect1 and
ImageRect2 using multi-scanline optimization. The resulting disparity image is returned in Disparity. In
contrast to binocular_distance_ms, the results are not transformed into distance values.
For this task, the three operators binocular_disparity, binocular_disparity_mg, and
binocular_disparity_ms can be used. binocular_disparity returns robust results in regions of
sufficient texture but fails where there is none. binocular_disparity_mg interpolates low-texture regions
but blurs discontinuities. binocular_disparity_ms preserves discontinuities and interpolates partially.
binocular_disparity_ms requires a reference image ImageRect1 and a search image ImageRect2
which both must be rectified, i.e., corresponding pixels must have the same row coordinate. If this
assumption is violated, the images can be rectified by using the operators calibrate_cameras,
gen_binocular_rectification_map, and map_image.
ImageRect1 and ImageRect2 can have different widths but must have the same height. Given a pixel in
ImageRect1, the homologous pixel in ImageRect2 is selected by searching along the corresponding row in
ImageRect2 and matching both pixels based on a similarity measure. The disparity is the number of pixels by
which each pixel in ImageRect1 needs to be moved to reach the homologous pixel in ImageRect2.
The search space is confined by the minimum and maximum disparity values MinDisparity and
MaxDisparity. If the minimum and maximum disparity values are set to an empty tuple, they are automatically
set to the maximum possible range for the given images ImageRect1 and ImageRect2.
To calculate the disparities from the similarity measure, the intermediate results are optimized by a multi-scanline
method. The optimization increases the robustness in low-texture areas without blurring discontinuities in the
disparity image. The optimization is controlled by the parameters SurfaceSmoothing and EdgeSmoothing.
SurfaceSmoothing controls the smoothness within surfaces. High values suppress disparity differences of one
pixel. EdgeSmoothing controls the occurrence and the shape of edges. Low values allow many edges, high
values lead to fewer and rounder edges. For both parameters, reasonable values usually range between 0 and 100.
If both parameters are set to zero, no optimization is performed.
The calculation of the disparities can be controlled by generic parameters. The following generic parameters
GenParamName and the corresponding values GenParamValue are supported:
’consistency_check’ Activates an optional post-processing step to increase robustness. Concurrent direct and re-
verse matches between reference patterns in ImageRect1 and ImageRect2 are required for a disparity
value to be returned. The check is switched off by setting GenParamValue to ’false’.
List of values: ’true’, ’false’.
Default: ’true’.
’disparity_offset’ Adapts the quality of the coarse-to-fine approach at discontinuities. The higher the value set in
GenParamValue, the more runtime is required.
Suggested values: 2, 3, 4.
Default: 3.
’method’: Determines the method used to calculate the disparities. The following parameters GenParamValue
can be set:
• ’accurate’: Most accurate calculation method, but requires more runtime and memory compared to the
remaining methods.
• ’fast’: Uses a coarse-to-fine scheme to improve the runtime. The coarse-to-fine scheme works in a
similar way to the scheme explained in binocular_disparity.
The coarse-to-fine method requires significantly less memory and is significantly faster than the ’accurate’
method, especially for large images or a large range of MinDisparity and MaxDisparity.
The coarse-to-fine scheme has the further advantage that it automatically estimates the range of
MinDisparity and MaxDisparity while traversing through the pyramid. As a consequence, nei-
ther MinDisparity nor MaxDisparity needs to be set. However, the generated disparity images
are less accurate for the ’fast’ method than for the default ’accurate’ approach. Especially at sharp
disparity jumps the ’fast’ method preserves discontinuities less accurately.
• ’very_fast’: Also uses a coarse-to-fine scheme to improve the runtime even further. However, this ap-
proach makes numerous assumptions that may lead to a smoothing of the disparities at discontinuities.
Per default, the number of levels of the coarse-to-fine scheme is estimated automatically. It is possible
to set the number of levels explicitly (see ’num_levels’).
The runtime of the operator is approximately linear to the image width, the image height, and the disparity
range. Consequently, the disparity range should be chosen as narrow as possible for large images. The
runtime of the coarse-to-fine scheme (which is used for ’fast’ or ’very_fast’) is approximately linear to the
image width and the image height. For small images and small disparity ranges the runtime of the coarse-to-
fine scheme may be larger than that of the ’accurate’ scheme.
List of values: ’accurate’, ’fast’, ’very_fast’.
Default: ’accurate’.
’num_levels’: Determines the number of pyramid levels that are used for the coarse-to-fine scheme. By setting
GenParamValue to ’auto’, the number of pyramid levels is automatically calculated.
Suggested values: 2, 3, ’auto’.
Default: ’auto’.
’similarity_measure’: Sets the similarity measure to be used. For both options ’census_dense’ (default) and ’cen-
sus_sparse’, the similarity measure is based on the Census transform. A Census transformed image contains
for every pixel information about the intensity topology within a support window around it.
• ’census_dense’: Uses a dense 9 x 7 pixels window and is more suitable for fine structures.
• ’census_sparse’: Uses a sparse 15 x 15 pixels window where only a subset of the pixels is evaluated. Is
more robust in low-texture areas.
List of values: ’census_dense’, ’census_sparse’.
Default: ’census_dense’.
’sub_disparity’: Activates sub-pixel refinement of disparities when set to ’true’. Can be deactivated by setting
’false’.
List of values: ’true’, ’false’.
Default: ’true’.
The resulting disparity is returned in the single-channel image Disparity. A quality measure for each disparity
value is returned in Score, containing the best (lowest) result of the optimized similarity measure of a reference
pixel.
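A typical call could look as follows (a minimal HDevelop sketch; the disparity range and the smoothing values are only illustrative):
* Multi-scanline disparities with the fast coarse-to-fine method and the
* sparse Census similarity measure.
binocular_disparity_ms (ImageRect1, ImageRect2, Disparity, Score, -40, 40, 50, 50, ['method','similarity_measure'], ['fast','census_sparse'])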
Parameters
. ImageRect1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte
Rectified image of camera 1.
. ImageRect2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte
Rectified image of camera 2.
. Disparity (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Disparity map.
. Score (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Score of the calculated disparity.
. MinDisparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Minimum of the expected disparities.
Default: -30
Value range: -32768 ≤ MinDisparity ≤ 32768
Restriction: MinDisparity <= MaxDisparity
. MaxDisparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Maximum of the expected disparities.
Default: 30
Value range: -32768 ≤ MaxDisparity ≤ 32768
Restriction: MinDisparity <= MaxDisparity
. SurfaceSmoothing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Smoothing of surfaces.
Default: 50
Suggested values: SurfaceSmoothing ∈ {20, 50, 100}
Restriction: SurfaceSmoothing >= 0
. EdgeSmoothing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Smoothing of edges.
Default: 50
Suggested values: EdgeSmoothing ∈ {20, 50, 100}
Restriction: EdgeSmoothing >= 0
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Parameter name(s) for the multi-scanline algorithm.
Default: []
List of values: GenParamName ∈ {’method’, ’similarity_measure’, ’consistency_check’, ’sub_disparity’,
’num_levels’, ’disparity_offset’}
Result
If the parameter values are correct, binocular_disparity_ms returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Execution Information
binocular_distance
Compute the distance values for a rectified stereo image pair using correlation techniques.
binocular_distance computes the distance values for a rectified stereo image pair using correlation tech-
niques. The operator first calculates the disparities between the two images ImageRect1 and ImageRect2
similar to binocular_disparity. The resulting disparities are transformed into distance values of the cor-
responding 3D world points to the rectified stereo camera system as in disparity_to_distance. The dis-
tances are returned in the single-channel image Distance in which each gray value represents the distance of the
respective 3D world point to the stereo camera system.
The algorithm requires a reference image ImageRect1 and a search image ImageRect2 which must be
rectified, i.e., corresponding epipolar lines are parallel and lie on identical image rows ( r1 = r2 ). In
case this assumption is violated the images can be rectified by using the operators calibrate_cameras,
gen_binocular_rectification_map and map_image. Hence, given a pixel in the reference image
ImageRect1 the homologous pixel in ImageRect2 is selected by searching along the corresponding row
in ImageRect2 and matching a local neighborhood within a rectangular window of size MaskWidth and
MaskHeight. For each defined reference pixel the pixel correspondences are transformed into distances of
the world points defined by the intersection of the lines of sight of a corresponding pixel pair to the z = 0 plane of
the rectified stereo system.
For this transformation, the rectified internal camera parameters CamParamRect1 of camera 1 and
CamParamRect2 of camera 2, and the pose with the external parameters RelPoseRect have to be
defined. The latter is of the form ccsR1 PccsR2 and characterizes the relative pose of both cameras
to each other. More precisely, it specifies the point transformation from the rectified camera system 2
(ccsR2) into the rectified camera system 1 (ccsR1), see Transformations / Poses and “Solution Guide
III-C - 3D Vision”. These parameters can be obtained from the operators calibrate_cameras and
gen_binocular_rectification_map. In addition, a quality measure for each distance value is returned in
Score, containing the best result of the matching function S of a reference pixel. For the matching, the gray
values of the original unprocessed images are used.
with
r1, c1, r2, c2: row and column coordinates of the corresponding pixels of the two input images,
g1, g2: gray values of the unprocessed input images,
N = (2m + 1)(2n + 1): size of the correlation window,
ḡ(r, c) = 1/N · Σ_{r'=r−m}^{r+m} Σ_{c'=c−n}^{c+n} g(r', c'): mean gray value within the correlation window of width 2m + 1 and height 2n + 1.
Note that the methods ’sad’ and ’ssd’ compare the gray values of the pixels within a mask window directly,
whereas ’ncc’ compensates for the mean gray value and its variance within the mask window. Therefore, if the
two images differ in brightness and contrast, this method should be preferred. For images with similar brightness
and contrast ’sad’ and ’ssd’ are to be preferred as they are faster because of less complex internal computations.
See binocular_disparity for further details.
It should be noted that the quality of the correlation decreases with rising S for the methods ’sad’ and ’ssd’ (the
best quality value is 0), but increases with rising S for the method ’ncc’ (the best quality value is 1.0).
The size of the correlation window (2m + 1 and 2n + 1) has to be odd numbered and is passed in MaskWidth and
MaskHeight. The search space is confined by the minimum and maximum disparity value MinDisparity and
MaxDisparity. Since pixel values are not defined beyond the image border, the resulting domain of Distance
and Score is generally not set along the image border within a margin of height MaskHeight/2 at the top
and bottom border and of width MaskWidth/2 at the left and right border. For the same reason, the maximum
disparity range is reduced at the left and right image border.
Since matching turns out to be highly unreliable when dealing with poorly textured areas, the minimum variance
within the correlation window can be defined in TextureThresh. This threshold is applied on both input images
ImageRect1 and ImageRect2. In addition, ScoreThresh guarantees the matching quality and defines the
maximum (’sad’,’ssd’) or, respectively, minimum (’ncc’) score value of the correlation function. Setting Filter
to ’left_right_check’, moreover, increases the robustness of the returned matches, as the result relies on a concurrent
direct and reverse match, whereas ’none’ switches it off.
The number of pyramid levels used to improve the time response of binocular_distance is determined by
NumLevels. Following a coarse-to-fine scheme disparity images of higher levels are computed and segmented
into rectangular subimages to reduce the disparity range on the next lower pyramid level. TextureThresh and
ScoreThresh are applied on every level and the returned domain of the Distance and Score images arises
from the intersection of the resulting domains of every single level. Generally, pyramid structures are the more
advantageous the more the distance image can be segmented into regions of homogeneous distance values and the
bigger the disparity range must be specified. As a drawback, coarse pyramid levels might lose important texture
information which can result in deficient distance values.
Finally, the value ’interpolation’ for parameter SubDistance increases the refinement and accuracy of the dis-
tance values. It is switched off by setting the parameter to ’none’.
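A typical call could look as follows (a minimal HDevelop sketch; the rectified images and the rectified camera parameters are assumed to come from gen_binocular_rectification_map and map_image, and the trailing control parameters are assumed to follow the order MinDisparity, MaxDisparity, NumLevels, ScoreThresh, Filter, SubDistance; all numeric values are only illustrative):
* Correlation-based distance image with NCC matching, two pyramid levels,
* left-right check, and interpolated sub-pixel distances.
binocular_distance (ImageRect1, ImageRect2, Distance, Score, CamParamRect1, CamParamRect2, RelPoseRect, 'ncc', 11, 11, 10.0, -40, 40, 2, 0.5, 'left_right_check', 'interpolation')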
Attention
If using cameras with telecentric lenses, the Distance is not defined as the distance of a point to the camera
but as the distance from the point to the plane, defined by the y-axes of both cameras and their baseline (see
gen_binocular_rectification_map).
For a stereo setup of mixed type (i.e., for a stereo setup in which one of the original cameras is a perspective camera
and the other camera is a telecentric camera; see gen_binocular_rectification_map), the rectifying
plane of the two cameras is in a position with respect to the object that would lead to very unintuitive distances.
Therefore, binocular_distance does not support a stereo setup of mixed type. For stereo setups of mixed
type, please use reconstruct_surface_stereo, in which the reference coordinate system can be chosen
arbitrarily. Alternatively, binocular_disparity and disparity_image_to_xyz might be used.
Additionally, stereo setups that contain cameras with and without hypercentric lenses at the same time are not
supported.
Parameters
. ImageRect1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte
Rectified image of camera 1.
. ImageRect2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte
Rectified image of camera 2.
. Distance (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Distance image.
. Score (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Evaluation of a distance value.
. CamParamRect1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters of the rectified camera 1.
. CamParamRect2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters of the rectified camera 2.
. RelPoseRect (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Point transformation from the rectified camera 2 to the rectified camera 1.
Number of elements: 7
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Matching function.
Default: ’ncc’
List of values: Method ∈ {’sad’, ’ssd’, ’ncc’}
. MaskWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Width of the correlation window.
Default: 11
Suggested values: MaskWidth ∈ {5, 7, 9, 11, 21}
Restriction: 3 <= MaskWidth && odd(MaskWidth)
. MaskHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Height of the correlation window.
Default: 11
Suggested values: MaskHeight ∈ {5, 7, 9, 11, 21}
Restriction: 3 <= MaskHeight && odd(MaskHeight)
. TextureThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real / integer
Variance threshold of textured image regions.
Default: 0.0
Suggested values: TextureThresh ∈ {0.0, 2.0, 5.0, 10.0}
Restriction: 0.0 <= TextureThresh
Result
binocular_distance returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception
is raised.
Execution Information
binocular_distance_mg
Compute the distance values for a rectified stereo image pair using multigrid methods.
binocular_distance_mg computes the distance values for a rectified stereo image pair using multi-
grid methods. The operator first calculates the disparities between two rectified images ImageRect1 and
ImageRect2 similar to binocular_disparity_mg. The resulting disparity values are then trans-
formed into distance values of the corresponding 3D world points to the rectified stereo camera system as in
disparity_to_distance. The distances are returned in the single-channel image Distance in which each
gray value represents the distance of the respective 3D world point to the stereo camera system. Different from
binocular_distance this operator uses a variational approach based on multigrid methods. This approach
returns distance values also for image parts that contain no texture.
The input images ImageRect1 and ImageRect2 must be a pair of rectified stereo images, i.e., corresponding
points must have the same row coordinate. In case this assumption is violated the images can be rectified by using
the operators calibrate_cameras, gen_binocular_rectification_map and map_image.
For the transformation of the disparity to the distance, the internal camera parameters of the rectified cam-
era 1 CamParamRect1 and of the rectified camera 2 CamParamRect2, as well as the relative pose of the
cameras RelPoseRect must be specified. The relative pose defines a point transformation from the recti-
fied camera system 2 to the rectified camera system 1. These parameters can be obtained from the operators
calibrate_cameras and gen_binocular_rectification_map.
A detailed description of the algorithm and of the remaining parameters can be found in the documentation of
binocular_disparity_mg.
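A typical call could look as follows (a minimal HDevelop sketch; the rectified images and the rectified camera parameters are assumed to be available, and the predefined ’fast_accurate’ parameter set is used only as an example):
* Variational (multigrid) distance image, dense also in untextured regions.
binocular_distance_mg (ImageRect1, ImageRect2, Distance, Score, CamParamRect1, CamParamRect2, RelPoseRect, 1.0, 30.0, 5.0, 0.0, 'false', 'default_parameters', 'fast_accurate')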
Attention
If using cameras with telecentric lenses, the Distance is not defined as the distance of a point to the camera
but as the distance from the point to the plane, defined by the y-axes of both cameras and their baseline (see
gen_binocular_rectification_map).
For a stereo setup of mixed type (i.e., for a stereo setup in which one of the original cameras is a perspective camera
and the other camera is a telecentric camera; see gen_binocular_rectification_map), the rectifying
plane of the two cameras is in a position with respect to the object that would lead to very unintuitive distances.
Therefore, binocular_distance_mg does not support a stereo setup of mixed type. For stereo setups of
mixed type, please use reconstruct_surface_stereo, in which the reference coordinate system can be
chosen arbitrarily. Alternatively, binocular_disparity_mg and disparity_image_to_xyz might be
used.
Additionally, stereo setups that contain cameras with and without hypercentric lenses at the same time are not
supported.
Parameters
. ImageRect1 (input_object) . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte / uint2 / real
Rectified image of camera 1.
. ImageRect2 (input_object) . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte / uint2 / real
Rectified image of camera 2.
. Distance (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : real
Distance image.
. Score (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : real
Score of the calculated disparity if CalculateScore is set to ’true’.
. CamParamRect1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters of the rectified camera 1.
. CamParamRect2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters of the rectified camera 2.
. RelPoseRect (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Point transformation from the rectified camera 2 to the rectified camera 1.
Number of elements: 7
. GrayConstancy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Weight of the gray value constancy in the data term.
Default: 1.0
Suggested values: GrayConstancy ∈ {0.0, 1.0, 2.0, 10.0}
Restriction: GrayConstancy >= 0.0
. GradientConstancy (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Weight of the gradient constancy in the data term.
Default: 30.0
Suggested values: GradientConstancy ∈ {0.0, 1.0, 5.0, 10.0, 30.0, 50.0, 70.0}
Restriction: GradientConstancy >= 0.0
. Smoothness (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Weight of the smoothness term in relation to the data term.
Default: 5.0
Suggested values: Smoothness ∈ {1.0, 3.0, 5.0, 10.0}
Restriction: Smoothness > 0.0
. InitialGuess (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Initial guess of the disparity.
Default: 0.0
Suggested values: InitialGuess ∈ {-30.0, -20.0, -10.0, 0.0, 10.0, 20.0, 30.0}
. CalculateScore (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Should the quality measure be returned in Score?
Default: ’false’
Suggested values: CalculateScore ∈ {’true’, ’false’}
. MGParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; string
Parameter name(s) for the multigrid algorithm.
Default: ’default_parameters’
List of values: MGParamName ∈ {’default_parameters’, ’mg_solver’, ’mg_cycle_type’, ’mg_pre_relax’,
’mg_post_relax’, ’initial_level’, ’pyramid_factor’, ’iterations’}
. MGParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / real / integer
Parameter value(s) for the multigrid algorithm.
Default: ’fast_accurate’
Suggested values: MGParamValue ∈ {’very_accurate’, ’accurate’, ’fast_accurate’, ’fast’, ’v’, ’w’, ’none’,
’gauss_seidel’, ’multigrid’, ’full_multigrid’, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 0.1, 0.2, 0.3,
0.4, 0.5, 0.6, 0.7, 0.8, 0.9, -1, -2, -3, -4, -5}
Result
If the parameter values are correct, binocular_distance_mg returns the value 2 (H_MSG_TRUE).
If the input is empty (no input images are available) the behavior can be set via set_system
(’no_object_result’,<Result>). If necessary, an exception is raised.
Execution Information
binocular_distance_ms
Compute the distance values for a rectified stereo image pair using multi-scanline optimization.
binocular_distance_ms computes the distance values for a rectified stereo image pair using multi-
scanline optimization. The operator first calculates the disparities between two rectified images ImageRect1
and ImageRect2 similar to binocular_disparity_ms. The resulting disparity values are then trans-
formed into distance values of the corresponding 3D world points to the rectified stereo camera system as in
disparity_to_distance. The distances are returned in the single-channel image Distance in which each
gray value represents the distance of the respective 3D world point to the stereo camera system.
binocular_distance_ms requires a reference image ImageRect1 and a search image ImageRect2
which both must be rectified, i.e., corresponding pixels must have the same row coordinate. If this
assumption is violated, the images can be rectified by using the operators calibrate_cameras,
gen_binocular_rectification_map, and map_image.
For the transformation of the disparity to the distance, the internal camera parameters of the rectified cam-
era 1 CamParamRect1 and of the rectified camera 2 CamParamRect2, as well as the relative pose of the
cameras RelPoseRect must be specified. The relative pose defines a point transformation from the recti-
fied camera system 2 to the rectified camera system 1. These parameters can be obtained from the operators
calibrate_cameras and gen_binocular_rectification_map.
A detailed description of the remaining parameters can be found in the documentation of
binocular_disparity_ms.
Attention
If using cameras with telecentric lenses, the Distance is not defined as the distance of a point to the camera
but as the distance from the point to the plane, defined by the y-axes of both cameras and their baseline (see
gen_binocular_rectification_map).
For a stereo setup of mixed type (i.e., for a stereo setup in which one of the original cameras is a perspective camera
and the other camera is a telecentric camera; see gen_binocular_rectification_map), the rectifying
plane of the two cameras is in a position with respect to the object that would lead to very unintuitive distances.
Therefore, binocular_distance_ms does not support a stereo setup of mixed type. For stereo setups of
mixed type, please use reconstruct_surface_stereo, in which the reference coordinate system can be
chosen arbitrarily.
Result
The operator disparity_image_to_xyz returns the value 2 (H_MSG_TRUE) if the input is not empty.
The behavior in case of empty input (no input image available) is set via the operator set_system
(’no_object_result’,<Result>). The behavior in case of empty region (the region is the empty set)
is set via set_system(’empty_region_result’,<Result>). If necessary an exception is raised.
Execution Information
disparity_to_distance
Transform a disparity value into a distance value in a rectified binocular stereo system.
disparity_to_distance transforms a disparity value into a distance of an object point to the binocular
stereo system. The cameras of this system must be rectified and are defined by the rectified internal parameters
CamParamRect1 of camera 1 and CamParamRect2 of camera 2, and the external parameters RelPoseRect.
The latter specifies the relative pose of both cameras to each other by defining a point transformation from the
rectified camera system 2 to the rectified camera system 1. These parameters can be obtained from the operators
calibrate_cameras and gen_binocular_rectification_map. The disparity value Disparity
is defined by the column difference of the image coordinates of two corresponding points on an epipolar line
according to the equation d = c2 − c1 (see also binocular_disparity). This value characterizes a set of 3D
object points that have the same distance to a plane parallel to the rectified image plane of the stereo system. The
distance of these points to the plane z = 0, which is parallel to the rectified image plane and contains the optical
centers of both cameras, is returned in Distance.
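A minimal HDevelop sketch (the disparity value is only illustrative; Disparity may also be a tuple of values):
* Convert a disparity of 25.5 pixels into the distance of the corresponding
* world points to the z = 0 plane of the rectified stereo system.
disparity_to_distance (CamParamRect1, CamParamRect2, RelPoseRect, 25.5, Distance)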
Attention
If using cameras with telecentric lenses, the Distance is not defined as the distance of a point to the camera
but as the distance from the point to the plane, defined by the y-axes of both cameras and their baseline (see
gen_binocular_rectification_map).
For a stereo setup of mixed type (i.e., for a stereo setup in which one of the original cameras is a perspective camera
and the other camera is a telecentric camera; see gen_binocular_rectification_map), the rectifying
plane of the two cameras is in a position with respect to the object that would lead to very unintuitive distances.
Therefore, disparity_to_distance does not support stereo setups of mixed type. For stereo setups of mixed
type, disparity_to_point_3d should be used instead.
Additionally, stereo setups that contain cameras with and without hypercentric lenses at the same time are not
supported.
Parameters
. CamParamRect1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Rectified internal camera parameters of camera 1.
. CamParamRect2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Rectified internal camera parameters of camera 2.
. RelPoseRect (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Point transformation from the rectified camera 2 to the rectified camera 1.
Number of elements: 7
. Disparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Disparity between the images of the world point.
. Distance (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Distance of a world point to the rectified camera system.
Result
disparity_to_distance returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an
exception is raised.
Execution Information
disparity_to_point_3d
Transform an image point and its disparity into a 3D point in a rectified stereo system.
Given an image point of the rectified camera 1, specified by its image coordinates (Row1,Col1), and its disparity in
a rectified binocular stereo system, disparity_to_point_3d computes the corresponding three dimensional
object point. The disparity value Disparity defines the column difference of the image coordinates of two
corresponding features on an epipolar line according to the equation d = c2 − c1 . The rectified binocular camera
system is specified by its internal camera parameters CamParamRect1 of camera 1 and CamParamRect2 of
camera 2, and the external parameters RelPoseRect defining the pose of the rectified camera 2 in relation to
the rectified camera 1. These camera parameters can be obtained from the operators calibrate_cameras and
gen_binocular_rectification_map. The 3D point is returned in Cartesian coordinates (X,Y,Z) of the
rectified camera system 1.
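A minimal HDevelop sketch (the image point and the disparity are only illustrative):
* Reconstruct the 3D point for the pixel (240.0, 320.0) of the rectified
* image 1, given a disparity of 25.5 pixels.
disparity_to_point_3d (CamParamRect1, CamParamRect2, RelPoseRect, 240.0, 320.0, 25.5, X, Y, Z)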
Attention
Stereo setups that contain cameras with and without hypercentric lenses at the same time are not supported.
Parameters
. CamParamRect1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Rectified internal camera parameters of camera 1.
. CamParamRect2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Rectified internal camera parameters of camera 2.
. RelPoseRect (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Pose of the rectified camera 2 in relation to the rectified camera 1.
Number of elements: 7
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Row coordinate of a point in the rectified image 1.
. Col1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Column coordinate of a point in the rectified image 1.
. Disparity (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Disparity of the images of the world point.
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
X coordinate of the 3D point.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Y coordinate of the 3D point.
. Z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Z coordinate of the 3D point.
Result
disparity_to_point_3d returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an
exception is raised.
Execution Information
distance_to_disparity
Transform a distance value into a disparity value in a rectified binocular stereo system.
Attention
If using cameras with telecentric lenses, the Distance is not defined as the distance of a point to the camera
but as the distance from the point to the plane, defined by the y-axes of both cameras and their baseline (see
gen_binocular_rectification_map).
For stereo setups of mixed type (i.e., for a stereo setup in which one of the original cameras is a perspective camera
and the other camera is a telecentric camera; see gen_binocular_rectification_map), the rectifying
plane of the two cameras is in a position with respect to the object that would lead to very unintuitive distances.
Therefore, distance_to_disparity does not support stereo setups of mixed type.
Additionally, stereo setups that contain cameras with and without hypercentric lenses at the same time are not
supported.
Parameters
Image coordinates result from 3D direction vectors by multiplication with the camera matrix CamMat:
(col, row, 1)^T = CamMat · (X, Y, 1)^T
Therefore, the fundamental matrix FMatrix is calculated from the essential matrix EMatrix and the camera
matrices CamMat1, CamMat2 by the following formula:
FMatrix = CamMat2^(-T) · EMatrix · CamMat1^(-1)
The transformation of the essential matrix to the fundamental matrix goes along with the propagation of the co-
variance matrices CovEMat to CovFMat. If CovEMat is empty CovFMat will be empty too.
The conversion operator essential_to_fundamental_matrix is used especially for a subsequent visual-
ization of the epipolar line structure via the fundamental matrix, which depicts the underlying stereo geometry.
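A minimal HDevelop sketch of the conversion (it is assumed here that the inputs follow the order EMatrix, CovEMat, CamMat1, CamMat2 and that they come from a preceding call to vector_to_essential_matrix):
* Convert the essential matrix and its covariance into the fundamental matrix.
essential_to_fundamental_matrix (EMatrix, CovEMat, CamMat1, CamMat2, FMatrix, CovFMat)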
Parameters
Possible Predecessors
vector_to_essential_matrix
Alternatives
rel_pose_to_fundamental_matrix
Module
3D Metrology
dimensions of the images must be provided in Width1, Height1, Width2, Height2. After rectification the
fundamental matrix is always of the canonical form
( 0  0   0 )
( 0  0  −1 )
( 0  1   0 )
In the case of a known covariance matrix CovFMat of the fundamental matrix FMatrix, the covariance matrix
CovFMatRect of the above rectified fundamental matrix is calculated. This can help for an improved stereo
matching process because the covariance matrix defines in terms of probabilities the image domain where to find
a corresponding match.
Similar to the operator gen_binocular_rectification_map the output images Map1 and Map2 describe
the transformation, also called mapping, of the original images to the rectified ones. The parameter Mapping
specifies whether bilinear interpolation (’bilinear_map’) should be applied between the pixels in the input image
or whether the gray value of the nearest neighboring pixel should be taken (’nn_map’). The size and resolution
of the maps and of the transformed images can be adjusted by the parameter SubSampling, which applies a
sub-sampling factor to the original images. For example, a factor of two will halve the image sizes. If just the two
homographies are required Mapping can be set to ’no_map’ and no maps will be returned. For speed reasons,
this option should be used if for a specific stereo configuration the images must be rectified only once. If the stereo
setup is fixed, the maps should be generated only once and both images should be rectified with map_image; this
will result in the smallest computational cost for on-line rectification.
When using the maps, the transformed images are of the same size as their maps. Each pixel in the map contains
the description of how the new pixel at this position is generated. The images Map1 and Map2 are single channel
images if Mapping is set to ’nn_map’ and five channel images if it is set to ’bilinear_map’. In the first channel,
which is of type int4, the pixels contain the linear coordinates of their reference pixels in the original image. With
Mapping equal to ’nn_map’ this reference pixel is the nearest neighbor to the back-transformed pixel coordinates
of the map. In the case of bilinear interpolation the reference pixel is the next upper left pixel relative to the back-
transformed coordinates. The following scheme shows the ordering of the pixels in the original image next to the
back-transformed pixel coordinates, where the reference pixel takes the number 2.
2 3
4 5
The channels 2 to 5, which are of type uint2, contain the weights of the relevant pixels for the bilinear interpolation.
Based on the rectified images, the disparity can be computed using binocular_disparity. In contrast to stereo
with fully calibrated cameras, using the operator gen_binocular_rectification_map and its succes-
sors, metric depth information cannot be derived for weakly calibrated cameras. The disparity map gives just a
qualitative depth ordering of the scene.
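A typical sequence could look as follows (a minimal HDevelop sketch; the image sizes, the fundamental matrix from a preceding estimation step, and the control parameters of binocular_disparity are only illustrative):
* Projective rectification maps from the fundamental matrix of a weakly
* calibrated stereo pair (512 x 512 images, bilinear mapping).
gen_binocular_proj_rectification (Map1, Map2, FMatrix, CovFMat, 512, 512, 512, 512, 1.0, 'bilinear_map', CovFMatRect, H1, H2)
* Rectify both images and compute a qualitative disparity map.
map_image (Image1, Map1, ImageRect1)
map_image (Image2, Map2, ImageRect2)
binocular_disparity (ImageRect1, ImageRect2, Disparity, Score, 'ncc', 11, 11, 10.0, -40, 40, 2, 0.5, 'left_right_check', 'interpolation')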
Parameters
. Map1 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; object : int4 / uint2
Image coding the rectification of the first image.
. Map2 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; object : int4 / uint2
Image coding the rectification of the second image.
. FMatrix (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real / integer
Fundamental matrix.
. CovFMat (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
9 × 9 covariance matrix of the fundamental matrix.
Default: []
. Width1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Width of the first image.
Default: 512
Suggested values: Width1 ∈ {128, 256, 512, 1024}
Restriction: Width1 > 0
. Height1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Height of the first image.
Default: 512
Suggested values: Height1 ∈ {128, 256, 512, 1024}
Restriction: Height1 > 0
Execution Information
gen_binocular_rectification_map
Generate transformation maps that describe the mapping of the images of a binocular camera pair to a common
rectified image plane.
Given a pair of stereo images, rectification determines a transformation of each image plane in a way that
pairs of conjugate epipolar lines become collinear and parallel to the horizontal image axes. This is required
for an efficient calculation of disparities or distances with operators such as binocular_disparity or
binocular_distance. The rectified images can be thought of as acquired by a new stereo rig, obtained
by rotating and, in case of telecentric area scan and line scan cameras, translating the original cameras. The projec-
tion centers (i.e., in the telecentric case, the direction of the optical axes) are maintained. For perspective cameras,
the image planes are additionally transformed into a common plane, which means that the focal lengths are set
equal, and the optical axes are parallel. For a stereo setup of mixed type (i.e., one perspective and one telecentric
camera), the image planes are also transformed into a common plane, as described below.
To achieve the transformation map for rectified images gen_binocular_rectification_map requires
the internal camera parameters CamParam1 of camera 1 and CamParam2 of camera 2, as well as the relative
pose RelPose, ccs1 Pccs2 , defining a point transformation from camera coordinate system 2 (ccs2) into camera
coordinate system 1 (ccs1), see Transformations / Poses and “Solution Guide III-C - 3D Vision”.
These parameters can be obtained, e.g., from the operator calibrate_cameras.
The internal camera parameters, modified by the rectification, are returned in CamParamRect1 for camera 1 and
CamParamRect2 for camera 2, respectively. The rotation and, in case of telecentric cameras, translation of the
rectified camera in relation to the original camera is specified by CamPoseRect1 and CamPoseRect2, respec-
tively. These poses are in the form ccsX PccsRX with ccsX: camera coordinate system of camera X and ccsRX:
camera coordinate system of camera X for the rectified image. Finally, RelPoseRect returns ccsR1 PccsR2 , the
relative pose of the rectified camera coordinate system 2 (ccsR2) relative to the rectified camera coordinate system
1 (ccsR1).
Rectification Method
For perspective area scan cameras, RelPoseRect only has a translation in x. Generally, the transformations are
defined in a way that the rectified camera 1 is left of the rectified camera 2. This means that the optical center of
camera 2 has a positive x coordinate of the rectified coordinate system of camera 1.
The projection onto a common plane has many degrees of freedom, which are implicitly restricted by selecting a
certain method in Method:
• ’viewing_direction’ uses the baseline as the x-axis of the common image plane. The mean of the viewing
directions (z-axes) of the two cameras is used to span the x-z plane of the rectified system. The resulting
rectified z-axis is the orientation of the common image plane and as such located in this plane and orthogonal
to the baseline. In many cases, the resulting rectified z-axis will not differ much from the mean of the two old
z-axes. The new focal length is determined in such a way that the old principal points have the same distance
to the new common image plane. The different z-axes directions are illustrated in the schematic below.
Illustration for the different z-axes directions using ’viewing_direction’. (1): View facing the base line (in
orange). (2): View along the base line (pointing into the page, in orange).
• ’geometric’ specifies the orientation of the common image plane by the cross product of the baseline and the
line of intersection of the original image planes. The new focal length is determined in such a way that the
old principal points have the same distance to the new common image plane.
For telecentric area scan and line scan cameras, the parameter Method is ignored. The relative pose of both
cameras is not uniquely defined in such a system since the cameras return identical images no matter how they
are translated along their optical axis. Yet, in order to define an absolute distance measurement to the cameras, a
standard position of both cameras is considered. This position is defined as follows: Both cameras are translated
along their optical axes until their distance is one meter and until the line between the cameras (baseline) forms the
same angle with both optical axes (i.e., the baseline and the optical axes form an isosceles triangle). The optical
axes remain unchanged. The relative pose of the rectified cameras RelPoseRect may be different from the
relative pose of the original cameras RelPose.
For a stereo setup of mixed type (i.e., one perspective and one telecentric camera), the parameter Method is
ignored. The rectified image plane is determined uniquely from the geometry of the perspective camera and the
relative pose of the two cameras. The normal of the rectified image plane is the vector that points from the
projection center of the perspective camera to the point on the optical axis of the telecentric camera that has the
shortest distance from the projection center of the perspective camera. This is also the z-axis of the rectified
perspective camera. The geometric base of the mixed camera system is a line that passes through the projection
center of the perspective camera and has the same direction as the z-axis of the telecentric camera, i.e., the base
is parallel to the viewing direction of the telecentric camera. The x-axis of the rectified perspective camera is
given by the base and the y-axis is constructed to form a right-handed coordinate system. To rectify the telecentric
camera, its optical axis must be shifted to the base and the image plane must be tilted by 90° or −90°. To
achieve this, a special type of object-side telecentric camera that is able to handle this special rectification geometry
(indicated by a negative image plane distance ImagePlaneDist) must be used for the rectified telecentric
camera. The representation of this special camera type should be regarded as a black box because it is used only
for rectification purposes in HALCON (for this reason, it is not documented in camera_calibration). The
rectified telecentric camera has the same orientation as the original telecentric camera, while its origin is translated
to a point on the base.
Rectification Maps
The mapping functions for the images of camera 1 and camera 2 are returned in the images Map1 and Map2.
MapType is used to specify the type of the output maps. If ’nearest_neighbor’ is chosen, both maps consist of one
image containing one channel, in which for each pixel of the resulting image the linearized coordinate of the pixel
of the input image is stored that is the nearest neighbor to the transformed coordinates. If ’bilinear’ interpolation
is chosen, both maps consist of one image containing five channels. In the first channel, for each pixel in the
resulting image, the linearized coordinate of the pixel in the input image is stored that is in the upper left position
relative to the transformed coordinates. The four other channels contain the weights of the four neighboring pixels
of the transformed coordinates which are used for the bilinear interpolation, in the following order:
2 3
4 5
The second channel, for example, contains the weights of the pixels that lie to the upper left relative to the trans-
formed coordinates. If ’coord_map_sub_pix’ is chosen, both maps consist of one vector field image, in which for
each pixel of the resulting image the subpixel precise coordinates in the input image are stored.
The size and resolution of the maps and of the transformed images can be adjusted by the SubSampling param-
eter which applies a sub-sampling factor to the original images.
If you want to re-use the created map in another program, you can save it as a multi-channel image with the
operator write_image, using the format ’tiff’.
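A minimal HDevelop sketch of this workflow is shown below; it is only an illustration under the assumption that CamParam1, CamParam2, and RelPose stem from a prior stereo calibration (e.g., calibrate_cameras), and that the file names and the sub-sampling factor are placeholders.
* Generate the rectification maps once for the calibrated stereo setup.
gen_binocular_rectification_map (Map1, Map2, CamParam1, CamParam2, RelPose, 1, 'viewing_direction', 'bilinear', CamParamRect1, CamParamRect2, CamPoseRect1, CamPoseRect2)
* Apply the maps to a stereo image pair.
map_image (Image1, Map1, ImageRect1)
map_image (Image2, Map2, ImageRect2)
* Optionally store the maps as multi-channel images for re-use in another program.
write_image (Map1, 'tiff', 0, 'rect_map_cam1')
write_image (Map2, 'tiff', 0, 'rect_map_cam2')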
Attention
Stereo setups that contain cameras with and without hypercentric lenses at the same time are not supported.
Parameters
. Map1 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; object : int4 / uint2 / vector_field
Image containing the mapping data of camera 1.
. Map2 (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; object : int4 / uint2 / vector_field
Image containing the mapping data of camera 2.
. CamParam1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal parameters of camera 1.
. CamParam2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal parameters of camera 2.
. RelPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Point transformation from camera 2 to camera 1.
Number of elements: 7
. SubSampling (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Subsampling factor.
Default: 1.0
Suggested values: SubSampling ∈ {0.5, 0.66, 1.0, 1.5, 2.0, 3.0, 4.0}
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of rectification.
Default: ’viewing_direction’
List of values: Method ∈ {’viewing_direction’, ’geometric’}
. MapType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of mapping.
Default: ’bilinear’
List of values: MapType ∈ {’nearest_neighbor’, ’bilinear’, ’coord_map_sub_pix’}
. CamParamRect1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Rectified internal parameters of camera 1.
. CamParamRect2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Rectified internal parameters of camera 2.
. CamPoseRect1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Point transformation from the rectified camera 1 to the original camera 1.
Number of elements: 7
. CamPoseRect2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Point transformation from the rectified camera 2 to the original camera 2.
Number of elements: 7
Result
gen_binocular_rectification_map returns 2 (H_MSG_TRUE) if all parameter values are correct. If
necessary, an exception is raised.
Execution Information
Get a 3D point from the intersection of two lines of sight within a binocular camera system.
Given two lines of sight from different cameras, specified by their image points (Row1,Col1) of camera 1 and
(Row2,Col2) of camera 2, intersect_lines_of_sight computes the 3D point of intersection of these
lines. The binocular camera system is specified by its internal camera parameters CamParam1 of the projective
camera 1 and CamParam2 of the projective camera 2, and the external parameters RelPose. The latter is of the form $^{ccs1}P_{ccs2}$ and characterizes the relative pose of both cameras to each other, thus defining a point transformation from camera coordinate system 2 (ccs2) into camera coordinate system 1 (ccs1), see Transformations / Poses
and “Solution Guide III-C - 3D Vision”. These camera parameters can be obtained, e.g., from the
operator calibrate_cameras, if the coordinates of the image points (Row1,Col1) and (Row2,Col2) re-
fer to the respective original image coordinate system. In case of rectified image coordinates (e.g., obtained
from rectified images), the rectified camera parameters must be passed, as they are returned by the operator
gen_binocular_rectification_map. The ’point of intersection’ is defined by the point with the shortest
distance to both lines of sight. This point is returned in Cartesian coordinates (X,Y,Z) of camera system 1 and its
distance to the lines of sight is passed in Dist.
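A brief HDevelop sketch follows; the image point coordinates are placeholder values, and CamParam1, CamParam2, and RelPose are assumed to come from a prior stereo calibration (or, for rectified coordinates, from gen_binocular_rectification_map).
* Corresponding image points in camera 1 and camera 2 (placeholder values).
Row1 := 301.5
Col1 := 452.3
Row2 := 298.7
Col2 := 420.1
* Reconstruct the 3D point in the coordinate system of camera 1.
intersect_lines_of_sight (CamParam1, CamParam2, RelPose, Row1, Col1, Row2, Col2, X, Y, Z, Dist)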
Attention
Stereo setups that contain cameras with and without hypercentric lenses at the same time are not supported.
Parameters
. CamParam1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal parameters of the projective camera 1.
. CamParam2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal parameters of the projective camera 2.
. RelPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
Point transformation from camera 2 to camera 1.
Number of elements: 7
. Row1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Row coordinate of a point in image 1.
. Col1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Column coordinate of a point in image 1.
. Row2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Row coordinate of the corresponding point in image 2.
. Col2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Column coordinate of the corresponding point in image 2.
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
X coordinate of the 3D point.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Y coordinate of the 3D point.
. Z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Z coordinate of the 3D point.
. Dist (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Distance of the 3D point to the lines of sight.
Result
intersect_lines_of_sight returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an
exception is raised.
Execution Information
See also
disparity_to_point_3d
Module
3D Metrology
Compute the essential matrix for a pair of stereo images by automatically finding correspondences between image
points.
Given a set of coordinates of characteristic points (Rows1, Cols1) and (Rows2, Cols2) in the stereo images
Image1 and Image2 along with known internal camera parameters, specified by the camera matrices CamMat1
and CamMat2, match_essential_matrix_ransac automatically determines the geometry of the stereo
setup and finds the correspondences between the characteristic points. The geometry of the stereo setup is repre-
sented by the essential matrix EMatrix and all corresponding points have to fulfill the epipolar constraint.
The operator match_essential_matrix_ransac is designed to deal with a linear camera model. The
internal camera parameters are passed by the arguments CamMat1 and CamMat2, which are 3×3 upper triangular
matrices describing an affine transformation. The relation between a vector (X,Y,1), representing the direction from
the camera to the viewed 3D space point and its (projective) 2D image coordinates (col,row,1) is:
$$\begin{pmatrix} \mathit{col} \\ \mathit{row} \\ 1 \end{pmatrix} = \mathit{CamMat} \cdot \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix} \quad \text{where} \quad \mathit{CamMat} = \begin{pmatrix} f/s_x & s & c_x \\ 0 & f/s_y & c_y \\ 0 & 0 & 1 \end{pmatrix} .$$
Note the column/row ordering in the point coordinates which has to be compliant with the x/y notation of the
camera coordinate system. The focal length is denoted by $f$, $s_x$ and $s_y$ are scaling factors, $s$ describes a skew factor, and $(c_x, c_y)$ indicates the principal point. Mainly, these are the elements known from the camera parameters as
used for example in calibrate_cameras. Alternatively, the elements of the camera matrix can be described
in a different way, see e.g. stationary_camera_self_calibration. Multiplied by the inverse of the
camera matrices the direction vectors in 3D space are obtained from the (projective) image coordinates. For known
camera matrices the epipolar constraint is given by:
$$\begin{pmatrix} X_2 \\ Y_2 \\ 1 \end{pmatrix}^{T} \cdot \mathit{EMatrix} \cdot \begin{pmatrix} X_1 \\ Y_1 \\ 1 \end{pmatrix} = 0 \,.$$
The matching process is based on characteristic points, which can be extracted with point operators like
points_foerstner or points_harris. The matching itself is carried out in two steps: first, gray value
correlations of mask windows around the input points in the first and the second image are determined and an ini-
tial matching between them is generated using the similarity of the windows in both images. Then, the RANSAC
algorithm is applied to find the essential matrix that maximizes the number of correspondences under the epipolar
constraint.
The size of the mask windows is MaskSize × MaskSize. Three metrics for the correlation can be se-
lected. If GrayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used, ’sad’
means the sum of absolute differences, and ’ncc’ is the normalized cross correlation. For details please refer to
binocular_disparity. The metric is minimized (’ssd’, ’sad’) or maximized (’ncc’) over all possible point
pairs. A matching found in this way is accepted only if the value of the metric is below the value of MatchThreshold
(’ssd’, ’sad’) or above that value (’ncc’).
To increase the speed of the algorithm, the search area for the matching operations can be limited. Only points
within a window of 2 · RowTolerance × 2 · ColTolerance points are considered. The offset of the center of
the search window in the second image with respect to the position of the current point in the first image is given
by RowMove and ColMove.
If the second camera is rotated around the optical axis with respect to the first camera the parameter Rotation
may contain an estimate for the rotation angle or an angle interval in radians. A good guess will increase the quality
of the gray value matching. If the actual rotation differs too much from the specified estimate the matching will
typically fail. In this case, an angle interval should be specified, and Rotation is a tuple with two elements. The
larger the given interval the slower the operator is since the RANSAC algorithm is run over all angle increments
within the interval.
After the initial matching is completed a randomized search algorithm (RANSAC) is used to determine the essen-
tial matrix EMatrix. It tries to find the essential matrix that is consistent with a maximum number of correspon-
dences. For a point to be accepted, the distance to its corresponding epipolar line must not exceed the threshold
DistanceThreshold.
The parameter EstimationMethod decides whether the relative orientation between the cameras is of a special
type and which algorithm is to be applied for its computation. If EstimationMethod is either ’normalized_dlt’
or ’gold_standard’ the relative orientation is arbitrary. Choosing ’trans_normalized_dlt’ or ’trans_gold_standard’
means that the relative motion between the cameras is a pure translation. The typical application for this special
motion case is the scenario of a single fixed camera looking onto a moving conveyor belt. In order to get a unique
solution in the correspondence problem the minimum required number of corresponding points is six in the general
case and three in the special, translational case.
The essential matrix is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen.
With ’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result, and returns the
covariance of the essential matrix CovEMat as well. Here, ’normalized_dlt’ and ’gold_standard’ stand for direct-
linear-transformation and gold-standard-algorithm, respectively. Note that, in general, the found correspondences
differ depending on the deployed estimation method.
The value Error indicates the overall quality of the estimation procedure and is the mean Euclidean distance in
pixels between the points and their corresponding epipolar lines.
Point pairs consistent with the mentioned constraints are considered to be correspondences. Points1 contains
the indices of the matched input points from the first image and Points2 contains the indices of the corresponding
points in the second image.
For the operator match_essential_matrix_ransac a special configuration of scene points and cameras
exists: if all 3D points lie in a single plane and additionally are all closer to one of the two cameras then the solution
in the essential matrix is not unique but twofold. As a consequence both solutions are computed and returned by
the operator. This means that the output parameters EMatrix, CovEMat and Error are of double length and
the values of the second solution are simply concatenated behind the values of the first one.
The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm, and hence
to obtain reproducible results. If RandSeed is set to a positive number the operator yields the same result on
every call with the same parameters because the internally used random number generator is initialized with the
RandSeed. If RandSeed = 0 the random number generator is initialized with the current time. In this case the
results may not be reproducible. The value set for the HALCON system variable ’seed_rand’ (see set_system)
does not affect the results of match_essential_matrix_ransac.
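A hedged HDevelop sketch of a typical call sequence is given below. The point-extraction parameters and the matching thresholds are illustrative only; the argument order after Cols2 (camera matrices, matching parameters, RANSAC parameters, outputs) is assumed to parallel the related operators in this chapter and should be checked against the full parameter list.
* Extract characteristic points in both images (illustrative filter parameters).
points_harris (Image1, 1.0, 3.0, 0.04, 1000, Rows1, Cols1)
points_harris (Image2, 1.0, 3.0, 0.04, 1000, Rows2, Cols2)
* Determine the essential matrix and the point correspondences (assumed argument order).
match_essential_matrix_ransac (Image1, Image2, Rows1, Cols1, Rows2, Cols2, CamMat1, CamMat2, 'ncc', 10, 0, 0, 200, 200, 0.0, 0.7, 'gold_standard', 1.0, 42, EMatrix, CovEMat, Error, Points1, Points2)
* Select the coordinates of the matched points via the returned indices.
tuple_select (Rows1, Points1, RowsMatched1)
tuple_select (Cols1, Points1, ColsMatched1)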
Parameters
. Image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2
Input image 1.
. Image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2
Input image 2.
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinates of characteristic points in image 1.
Restriction: length(Rows1) >= 6 || length(Rows1) >= 3
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinates of characteristic points in image 1.
Restriction: length(Cols1) == length(Rows1)
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinates of characteristic points in image 2.
Restriction: length(Rows2) >= 6 || length(Rows2) >= 3
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinates of characteristic points in image 2.
Restriction: length(Cols2) == length(Rows2)
match_fundamental_matrix_distortion_ransac ( Image1,
Image2 : : Rows1, Cols1, Rows2, Cols2, GrayMatchMethod,
MaskSize, RowMove, ColMove, RowTolerance, ColTolerance, Rotation,
MatchThreshold, EstimationMethod, DistanceThreshold,
RandSeed : FMatrix, Kappa, Error, Points1, Points2 )
Compute the fundamental matrix and the radial distortion coefficient for a pair of stereo images by automatically
finding correspondences between image points.
Given a set of coordinates of characteristic points (Rows1, Cols1) and (Rows2, Cols2)
in the stereo images Image1 and Image2, which must be of identical size,
match_fundamental_matrix_distortion_ransac automatically finds the correspondences be-
tween the characteristic points and determines the geometry of the stereo setup. For unknown cameras the
geometry of the stereo setup is represented by the fundamental matrix FMatrix and the radial distortion
coefficient Kappa (κ). All corresponding points must fulfill the epipolar constraint:
$$\begin{pmatrix} c_2 \\ r_2 \\ 1 \end{pmatrix}^{T} \cdot \mathit{FMatrix} \cdot \begin{pmatrix} c_1 \\ r_1 \\ 1 \end{pmatrix} = 0 \,.$$
Here, $(r_1, c_1)$ and $(r_2, c_2)$ denote image points that are obtained by undistorting the input image points with the division model (see Calibration):
$$r = \frac{\tilde{r}}{1 + \kappa(\tilde{r}^2 + \tilde{c}^2)} \qquad c = \frac{\tilde{c}}{1 + \kappa(\tilde{r}^2 + \tilde{c}^2)}$$
Here, $(\tilde{r}, \tilde{c})$ denote the distorted image points, specified relative to the image center, and $w$ and $h$ denote the width and height of
the input images. Thus, match_fundamental_matrix_distortion_ransac assumes that the principal
point of the camera, i.e., the center of the radial distortions, lies at the center of the image.
The returned Kappa can be used to construct camera parameters that can be used to rectify images or
points (see change_radial_distortion_cam_par, change_radial_distortion_image, and
change_radial_distortion_points).
Note the column/row ordering in the point coordinates above: since the fundamental matrix encodes the projective
relation between two stereo images embedded in 3D space, the x/y notation must be compliant with the camera
coordinate system. Therefore, (x,y) coordinates correspond to (column,row) pairs.
The matching process is based on characteristic points, which can be extracted with point operators like
points_foerstner or points_harris. The matching itself is carried out in two steps: first, gray value
correlations of mask windows around the input points in the first and the second image are determined and an ini-
tial matching between them is generated using the similarity of the windows in both images. Then, the RANSAC
algorithm is applied to find the fundamental matrix and radial distortion coefficient that maximizes the number of
correspondences under the epipolar constraint.
The size of the mask windows used for the matching is MaskSize×MaskSize. Three metrics for the correlation
can be selected. If GrayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used,
’sad’ means the sum of absolute differences, and ’ncc’ is the normalized cross correlation. For details please refer
to binocular_disparity. The metric is minimized (’ssd’, ’sad’) or maximized (’ncc’) over all possible point
pairs. A matching thus found is only accepted if the value of the metric is below the value of MatchThreshold
(’ssd’, ’sad’) or above that value (’ncc’).
To increase the speed of the algorithm the search area for the match candidates can be limited to a rectangle by
specifying its size and offset. Only points within a window of 2 · RowTolerance × 2 · ColTolerance points
are considered. The offset of the center of the search window in the second image with respect to the position of
the current point in the first image is given by RowMove and ColMove.
If the second camera is rotated around the optical axis with respect to the first camera, the parameter Rotation
may contain an estimate for the rotation angle or an angle interval in radians. A good guess will increase the quality
of the gray value matching. If the actual rotation differs too much from the specified estimate, the matching will
typically fail. In this case, an angle interval should be specified and Rotation is a tuple with two elements. The
larger the given interval is, the slower the operator is, since the RANSAC algorithm is run over all (automatically
determined) angle increments within the interval.
After the initial matching has been completed, a randomized search algorithm (RANSAC) is used to determine the
fundamental matrix FMatrix and the radial distortion coefficient Kappa. It tries to find the parameters that are
consistent with a maximum number of correspondences. For a point to be accepted, the distance in pixels to its
corresponding epipolar line must not exceed the threshold DistanceThreshold.
The parameter EstimationMethod decides whether the relative orientation between the cameras is of a spe-
cial type and which algorithm is to be applied for its computation. If EstimationMethod is either ’lin-
ear’ or ’gold_standard’, the relative orientation is arbitrary. If the left and right cameras are identical and the
relative orientation between them is a pure translation, EstimationMethod can be set to ’trans_linear’ or
’trans_gold_standard’. The typical application for this special motion case is the scenario of a single fixed cam-
era looking onto a moving conveyor belt. In order to get a unique solution for the correspondence problem, the
minimum required number of corresponding points is nine in the general case and four in the special translational
case.
The fundamental matrix is computed by a linear algorithm if EstimationMethod is set to ’linear’ or
’trans_linear’. This algorithm is very fast. For the pure translation case (EstimationMethod = ’trans_linear’),
the linear method returns accurate results for small to moderate noise of the point coordinates and for
most distortions (except for very small distortions). For a general relative orientation of the two cameras
(EstimationMethod = ’linear’), the linear method only returns accurate results for very small noise of
the point coordinates and for sufficiently large distortions. For EstimationMethod = ’gold_standard’ or
’trans_gold_standard’, a mathematically optimal but slower optimization is used, which minimizes the geometric
reprojection error of reconstructed projective 3D points. For a general relative orientation of the two cameras, in
general EstimationMethod = ’gold_standard’ should be selected.
The value Error indicates the overall quality of the estimation procedure and is the mean symmetric Euclidean
distance in pixels between the points and their corresponding epipolar lines.
Point pairs consistent with the above constraints are considered to be corresponding points. Points1 contains the
indices of the matched input points from the first image and Points2 contains the indices of the corresponding
points in the second image.
The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm, and hence to
obtain reproducible results. If RandSeed is set to a positive number, the operator returns the same result on every
call with the same parameters because the internally used random number generator is initialized with RandSeed.
If RandSeed = 0, the random number generator is initialized with the current time. In this case the results may
not be reproducible. The value set for the HALCON system variable ’seed_rand’ (see set_system) does not
affect the results of match_fundamental_matrix_distortion_ransac.
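A hedged HDevelop sketch following the signature printed above; the point-extraction parameters and the matching thresholds are illustrative placeholders.
* Extract characteristic points in both images (illustrative filter parameters).
points_harris (Image1, 1.0, 3.0, 0.04, 1000, Rows1, Cols1)
points_harris (Image2, 1.0, 3.0, 0.04, 1000, Rows2, Cols2)
* Estimate the fundamental matrix and the radial distortion coefficient Kappa.
match_fundamental_matrix_distortion_ransac (Image1, Image2, Rows1, Cols1, Rows2, Cols2, 'ncc', 10, 0, 0, 200, 200, 0.0, 0.7, 'gold_standard', 1.0, 42, FMatrix, Kappa, Error, Points1, Points2)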
Parameters
. Image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2
Input image 1.
. Image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2
Input image 2.
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real / integer
Input points in image 1 (row coordinate).
Restriction: length(Rows1) >= 9 || length(Rows1) >= 4
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real / integer
Input points in image 1 (column coordinate).
Restriction: length(Cols1) == length(Rows1)
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real / integer
Input points in image 2 (row coordinate).
Restriction: length(Rows2) >= 9 || length(Rows2) >= 4
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real / integer
Input points in image 2 (column coordinate).
Restriction: length(Cols2) == length(Rows2)
. GrayMatchMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Gray value match metric.
Default: ’ncc’
List of values: GrayMatchMethod ∈ {’ncc’, ’ssd’, ’sad’}
. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Size of gray value masks.
Default: 10
Suggested values: MaskSize ∈ {3, 7, 15}
Value range: 1 ≤ MaskSize
. RowMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Average row coordinate offset of corresponding points.
Default: 0
. ColMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Average column coordinate offset of corresponding points.
Default: 0
. RowTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Half height of matching search window.
Default: 200
Restriction: RowTolerance >= 1
. ColTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Half width of matching search window.
Default: 200
Restriction: ColTolerance >= 1
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; real / integer
Estimate of the relative rotation of the second image with respect to the first image.
Default: 0.0
Suggested values: Rotation ∈ {0.0, 0.1, -0.1, 0.7854, 1.571, 3.142}
Execution Information
Possible Predecessors
points_foerstner, points_harris
Possible Successors
vector_to_fundamental_matrix_distortion, change_radial_distortion_cam_par,
change_radial_distortion_image, change_radial_distortion_points,
gen_binocular_proj_rectification
See also
match_fundamental_matrix_ransac, match_essential_matrix_ransac,
match_rel_pose_ransac, proj_match_points_ransac, calibrate_cameras
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2003.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
3D Metrology
Compute the fundamental matrix for a pair of stereo images by automatically finding correspondences between
image points.
Given a set of coordinates of characteristic points (Rows1, Cols1) and (Rows2, Cols2) in the stereo images
Image1 and Image2, match_fundamental_matrix_ransac automatically finds the correspondences
between the characteristic points and determines the geometry of the stereo setup. For unknown cameras the
geometry of the stereo setup is represented by the fundamental matrix FMatrix and all corresponding points
have to fulfill the epipolar constraint, namely:
$$\begin{pmatrix} \mathit{Cols2} \\ \mathit{Rows2} \\ 1 \end{pmatrix}^{T} \cdot \mathit{FMatrix} \cdot \begin{pmatrix} \mathit{Cols1} \\ \mathit{Rows1} \\ 1 \end{pmatrix} = 0 \,.$$
Note the column/row ordering in the point coordinates: because the fundamental matrix encodes the projective
relation between two stereo images embedded in 3D space, the x/y notation has to be compliant with the camera
coordinate system. So, (x,y) coordinates correspond to (column,row) pairs.
The matching process is based on characteristic points, which can be extracted with point operators like
points_foerstner or points_harris. The matching itself is carried out in two steps: first, gray value
correlations of mask windows around the input points in the first and the second image are determined and an initial
matching between them is generated using the similarity of the windows in both images. Then, the RANSAC algo-
rithm is applied to find the fundamental matrix that maximizes the number of correspondences under the epipolar
constraint.
The size of the mask windows is MaskSize × MaskSize. Three metrics for the correlation can be se-
lected. If GrayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used, ’sad’
means the sum of absolute differences, and ’ncc’ is the normalized cross correlation. For details please refer to
binocular_disparity. The metric is minimized (’ssd’, ’sad’) or maximized (’ncc’) over all possible point
pairs. A matching found in this way is accepted only if the value of the metric is below the value of MatchThreshold
(’ssd’, ’sad’) or above that value (’ncc’).
To increase the speed of the algorithm the search area for the matching operations can be limited. Only points
within a window of 2 · RowTolerance × 2 · ColTolerance points are considered. The offset of the center of
the search window in the second image with respect to the position of the current point in the first image is given
by RowMove and ColMove.
If the second camera is rotated around the optical axis with respect to the first camera the parameter Rotation
may contain an estimate for the rotation angle or an angle interval in radians. A good guess will increase the quality
of the gray value matching. If the actual rotation differs too much from the specified estimate the matching will
typically fail. In this case, an angle interval should be specified and Rotation is a tuple with two elements. The
larger the given interval the slower the operator is since the RANSAC algorithm is run over all angle increments
within the interval.
After the initial matching is completed a randomized search algorithm (RANSAC) is used to determine the fun-
damental matrix FMatrix. It tries to find the matrix that is consistent with a maximum number of correspon-
dences. For a point to be accepted, the distance to its corresponding epipolar line must not exceed the threshold
DistanceThreshold.
The parameter EstimationMethod decides whether the relative orientation between the cameras is of a special
type and which algorithm is to be applied for its computation. If EstimationMethod is either ’normalized_dlt’
or ’gold_standard’ the relative orientation is arbitrary. If the left and right cameras are identical and the relative orientation between them is a pure translation, choose EstimationMethod equal to ’trans_normalized_dlt’ or
’trans_gold_standard’. The typical application for this special motion case is the scenario of a single fixed camera
looking onto a moving conveyor belt. In order to get a unique solution in the correspondence problem the min-
imum required number of corresponding points is eight in the general case and three in the special, translational
case.
The fundamental matrix is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen.
With ’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result, and returns as
well the covariance of the fundamental matrix CovFMat. Here, ’normalized_dlt’ and ’gold_standard’ stand for
direct-linear-transformation and gold-standard-algorithm respectively.
The value Error indicates the overall quality of the estimation procedure and is the mean Euclidean distance in
pixels between the points and their corresponding epipolar lines.
Point pairs consistent with the mentioned constraints are considered to be correspondences. Points1 contains
the indices of the matched input points from the first image and Points2 contains the indices of the corresponding
points in the second image.
The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm, and hence
to obtain reproducible results. If RandSeed is set to a positive number the operator yields the same result on
every call with the same parameters because the internally used random number generator is initialized with the
RandSeed. If RandSeed = 0 the random number generator is initialized with the current time. In this case the
results may not be reproducible. The value set for the HALCON system variable ’seed_rand’ (see set_system)
does not affect the results of match_fundamental_matrix_ransac.
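A hedged HDevelop sketch; the argument order is assumed to parallel match_fundamental_matrix_distortion_ransac, with the covariance CovFMat replacing Kappa among the outputs, and the numeric parameters are placeholders.
* Extract characteristic points in both images (illustrative filter parameters).
points_harris (Image1, 1.0, 3.0, 0.04, 1000, Rows1, Cols1)
points_harris (Image2, 1.0, 3.0, 0.04, 1000, Rows2, Cols2)
* Estimate the fundamental matrix and the point correspondences (assumed argument order).
match_fundamental_matrix_ransac (Image1, Image2, Rows1, Cols1, Rows2, Cols2, 'ncc', 10, 0, 0, 200, 200, 0.0, 0.7, 'gold_standard', 1.0, 42, FMatrix, CovFMat, Error, Points1, Points2)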
Parameters
Execution Information
Compute the relative orientation between two cameras by automatically finding correspondences between image
points.
Given a set of coordinates of characteristic points (Rows1, Cols1) and (Rows2, Cols2) in the stereo
images Image1 and Image2 along with known internal camera parameters CamPar1 and CamPar2,
match_rel_pose_ransac automatically determines the geometry of the stereo setup and finds the corre-
spondences between the characteristic points. The geometry of the stereo setup is represented by the relative
pose RelPose and all corresponding points have to fulfill the epipolar constraint. RelPose indicates the rel-
ative pose of camera 1 with respect to camera 2 (See create_pose for more information about poses and
their representations.). This is in accordance with the explicit calibration of a stereo setup using the operator
calibrate_cameras. Now, let R, t be the rotation and translation of the relative pose. Then, the essential
matrix $E$ is defined as $E = ([t]_\times R)^{T}$, where $[t]_\times$ denotes the $3 \times 3$ skew-symmetric matrix realizing the cross product with the vector $t$. The pose can be determined from the epipolar constraint:
$$\begin{pmatrix} X_2 \\ Y_2 \\ 1 \end{pmatrix}^{T} \cdot ([t]_\times R)^{T} \cdot \begin{pmatrix} X_1 \\ Y_1 \\ 1 \end{pmatrix} = 0 \quad \text{where} \quad [t]_\times = \begin{pmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{pmatrix} .$$
Note that the essential matrix is a projective entity and thus is defined only up to a scaling factor. It follows that the translation vector of the relative pose can only be determined up to scale, too. In fact, the computed translation
vector will always be normalized to unit length. As a consequence, a subsequent three-dimensional reconstruction
of the scene, using for instance vector_to_rel_pose, can be carried out only up to a single global scaling
factor.
The operator match_rel_pose_ransac is designed to deal with a camera model, that includes lens distor-
tions. This is in contrast to the operator match_essential_matrix_ransac, which encompasses only
straight line preserving cameras. The camera parameters are passed in CamPar1 and CamPar2. The 3D
direction vectors $(X_1, Y_1, 1)$ and $(X_2, Y_2, 1)$ are calculated from the point coordinates (Rows1,Cols1) and (Rows2,Cols2) by inverting the process of projection (see Calibration).
The matching process is based on characteristic points, which can be extracted with point operators like
points_foerstner or points_harris. The matching itself is carried out in two steps: first, gray value
correlations of mask windows around the input points in the first and the second image are determined and an ini-
tial matching between them is generated using the similarity of the windows in both images. Then, the RANSAC
algorithm is applied to find the relative pose that maximizes the number of correspondences under the epipolar
constraint.
The size of the mask windows is MaskSize × MaskSize. Three metrics for the correlation can be se-
lected. If GrayMatchMethod has the value ’ssd’, the sum of the squared gray value differences is used, ’sad’
means the sum of absolute differences, and ’ncc’ is the normalized cross correlation. For details please refer to
binocular_disparity. The metric is minimized (’ssd’, ’sad’) or maximized (’ncc’) over all possible point
pairs. A matching found in this way is accepted only if the value of the metric is below the value of MatchThreshold
(’ssd’, ’sad’) or above that value (’ncc’).
To increase the speed of the algorithm, the search area for the matching operations can be limited. Only points
within a window of 2 · RowTolerance × 2 · ColTolerance points are considered. The offset of the center of
the search window in the second image with respect to the position of the current point in the first image is given
by RowMove and ColMove.
If the second camera is rotated around the optical axis with respect to the first camera the parameter Rotation
may contain an estimate for the rotation angle or an angle interval in radians. A good guess will increase the quality
of the gray value matching. If the actual rotation differs too much from the specified estimate the matching will
typically fail. In this case, an angle interval should be specified, and Rotation is a tuple with two elements. The
larger the given interval the slower the operator is since the RANSAC algorithm is run over all angle increments
within the interval.
After the initial matching is completed a randomized search algorithm (RANSAC) is used to determine the rel-
ative pose RelPose. It tries to find the relative pose that is consistent with a maximum number of correspon-
dences. For a point to be accepted, the distance to its corresponding epipolar line must not exceed the threshold
DistanceThreshold.
The parameter EstimationMethod decides whether the relative orientation between the cameras is of a special
type and which algorithm is to be applied for its computation. If EstimationMethod is either ’normalized_dlt’
or ’gold_standard’ the relative orientation is arbitrary. Choosing ’trans_normalized_dlt’ or ’trans_gold_standard’
means that the relative motion between the cameras is a pure translation. The typical application for this special
motion case is the scenario of a single fixed camera looking onto a moving conveyor belt. In order to get a unique
solution in the correspondence problem the minimum required number of corresponding points is six in the general
case and three in the special, translational case.
The relative pose is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen. With
’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result, and returns as well the
covariance of the relative pose CovRelPose. Here, ’normalized_dlt’ and ’gold_standard’ stand for direct-linear-
transformation and gold-standard-algorithm, respectively. Note that, in general, the found correspondences differ
depending on the deployed estimation method.
The value Error indicates the overall quality of the estimation procedure and is the mean Euclidean distance in
pixels between the points and their corresponding epipolar lines.
Point pairs consistent with the mentioned constraints are considered to be correspondences. Points1 contains
the indices of the matched input points from the first image and Points2 contains the indices of the corresponding
points in the second image.
For the operator match_rel_pose_ransac a special configuration of scene points and cameras exists: if all
3D points lie in a single plane and additionally are all closer to one of the two cameras then the solution in the
essential matrix is not unique but twofold. As a consequence both solutions are computed and returned by the
operator. This means that the output parameters RelPose, CovRelPose and Error are of double length and
the values of the second solution are simply concatenated behind the values of the first one.
The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm, and hence
to obtain reproducible results. If RandSeed is set to a positive number the operator yields the same result on
every call with the same parameters because the internally used random number generator is initialized with the
RandSeed. If RandSeed = 0 the random number generator is initialized with the current time. In this case the
results may not be reproducible. The value set for the HALCON system variable ’seed_rand’ (see set_system)
does not affect the results of match_rel_pose_ransac.
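A hedged HDevelop sketch of a typical use; the parameter values are illustrative, the order of the leading arguments follows the parameter list below, and the trailing arguments (EstimationMethod, DistanceThreshold, RandSeed and the outputs) are assumed by analogy with the related operators in this chapter.
* Extract characteristic points in both images (illustrative filter parameters).
points_harris (Image1, 1.0, 3.0, 0.04, 1000, Rows1, Cols1)
points_harris (Image2, 1.0, 3.0, 0.04, 1000, Rows2, Cols2)
* Determine the relative pose of the stereo setup (assumed argument order).
match_rel_pose_ransac (Image1, Image2, Rows1, Cols1, Rows2, Cols2, CamPar1, CamPar2, 'ssd', 10, 0, 0, 200, 200, 0.0, 20, 'gold_standard', 1.0, 42, RelPose, CovRelPose, Error, Points1, Points2)
* RelPose can then be refined with vector_to_rel_pose or used in gen_binocular_rectification_map.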
Parameters
. Image1 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2
Input image 1.
. Image2 (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2
Input image 2.
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinates of characteristic points in image 1.
Restriction: length(Rows1) >= 6 || length(Rows1) >= 3
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinates of characteristic points in image 1.
Restriction: length(Cols1) == length(Rows1)
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinates of characteristic points in image 2.
Restriction: length(Rows2) >= 6 || length(Rows2) >= 3
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinates of characteristic points in image 2.
Restriction: length(Cols2) == length(Rows2)
. CamPar1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Parameters of the 1st camera.
. CamPar2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Parameters of the 2nd camera.
. GrayMatchMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Gray value comparison metric.
Default: ’ssd’
List of values: GrayMatchMethod ∈ {’ssd’, ’sad’, ’ncc’}
. MaskSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Size of gray value masks.
Default: 10
Suggested values: MaskSize ∈ {3, 7, 15}
Value range: 1 ≤ MaskSize
. RowMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Average row coordinate shift of corresponding points.
Default: 0
Value range: 0 ≤ RowMove ≤ 200
. ColMove (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Average column coordinate shift of corresponding points.
Default: 0
Value range: 0 ≤ ColMove ≤ 200
. RowTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Half height of matching search window.
Default: 200
Value range: 1 ≤ RowTolerance
. ColTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Half width of matching search window.
Default: 200
Value range: 1 ≤ ColTolerance
. Rotation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.rad(-array) ; real / integer
Estimate of the relative orientation of the right image with respect to the left image.
Default: 0.0
Suggested values: Rotation ∈ {0.0, 0.1, -0.1, 0.7854, 1.571, 3.142}
. MatchThreshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer / real
Threshold for gray value matching.
Default: 10
Suggested values: MatchThreshold ∈ {10, 20, 50, 100, 0.9, 0.7}
Possible Predecessors
points_foerstner, points_harris
Possible Successors
vector_to_rel_pose, gen_binocular_rectification_map
See also
binocular_calibration, match_fundamental_matrix_ransac,
match_essential_matrix_ransac, create_pose
References
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press,
Cambridge; 2003.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation
of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.
Module
3D Metrology
and the projective coordinates are returned by the four-vector (X,Y,Z,W). This type of reconstruction is also known
as projective triangulation. If, additionally, the covariances CovRR1, CovRC1, CovCC1 and CovRR2, CovRC2, CovCC2 of the image points are given, the covariances of the reconstructed points CovXYZW are computed too.
Let n be the number of points. Then the concatenated covariances are stored in a 16 × n tuple. The computation
of the covariances is more precise if the covariance of the fundamental matrix CovFMat is provided.
The operator reconst3d_from_fundamental_matrix is typically used after
match_fundamental_matrix_ransac to perform 3D reconstruction. This will save computational
cost compared with the deployment of vector_to_fundamental_matrix.
reconst3d_from_fundamental_matrix is the projective equivalent to the Euclidean reconstruction op-
erator intersect_lines_of_sight.
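A hedged HDevelop sketch of the typical use after matching; the empty tuples for the point covariances rely on the documented defaults, and the index selection via Points1/Points2 assumes the matching operator mentioned above was used beforehand.
* Select the matched point coordinates via the indices returned by the matching operator.
tuple_select (Rows1, Points1, R1)
tuple_select (Cols1, Points1, C1)
tuple_select (Rows2, Points2, R2)
tuple_select (Cols2, Points2, C2)
* Projective triangulation; empty tuples mean unknown point covariances.
reconst3d_from_fundamental_matrix (R1, C1, R2, C2, [], [], [], [], [], [], FMatrix, CovFMat, X, Y, Z, W, CovXYZW)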
Parameters
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Input points in image 1 (row coordinate).
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Input points in image 1 (column coordinate).
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Input points in image 2 (row coordinate).
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Input points in image 2 (column coordinate).
. CovRR1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Row coordinate variance of the points in image 1.
Default: []
. CovRC1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Covariance of the points in image 1.
Default: []
. CovCC1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Column coordinate variance of the points in image 1.
Default: []
. CovRR2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Row coordinate variance of the points in image 2.
Default: []
. CovRC2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Covariance of the points in image 2.
Default: []
. CovCC2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Column coordinate variance of the points in image 2.
Default: []
. FMatrix (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d ; real
Fundamental matrix.
. CovFMat (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
9 × 9 covariance matrix of the fundamental matrix.
Default: []
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
X coordinates of the reconstructed points in projective 3D space.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Y coordinates of the reconstructed points in projective 3D space.
. Z (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Z coordinates of the reconstructed points in projective 3D space.
. W (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
W coordinates of the reconstructed points in projective 3D space.
. CovXYZW (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Covariance matrices of the reconstructed points.
Execution Information
Compute the fundamental matrix from the relative orientation of two cameras.
Cameras including lens distortions can be modeled by the following set of parameters: the focal length $f$, two scaling factors $s_x$, $s_y$, the coordinates of the principal point $(c_x, c_y)$, and the distortion coefficient $\kappa$. For a more
detailed description see the chapter Calibration. Only cameras with a distortion coefficient equal to zero project
straight lines in the world onto straight lines in the image. This is also true for telecentric cameras and for cameras
with tilt lenses. rel_pose_to_fundamental_matrix handles telecentric lenses and tilt lenses correctly.
However, for reasons of simplicity, these lens types are ignored in the formulas below. If the distortion coefficient
is equal to zero, image projection is a linear mapping and the camera, i.e., the set of internal parameters, can be
described by the camera matrix $\mathit{CamMat}$:
$$\mathit{CamMat} = \begin{pmatrix} f/s_x & 0 & c_x \\ 0 & f/s_y & c_y \\ 0 & 0 & 1 \end{pmatrix} .$$
Going from a nonlinear model to a linear model is an approximation of the real underlying camera. For a variety of
camera lenses, especially lenses with long focal length, the error induced by this approximation can be neglected.
Following the formula $E = ([t]_\times R)^{T}$, the essential matrix $E$ is derived from the translation $t$ and the rotation $R$ of the relative pose RelPose (see also operator vector_to_rel_pose). In the linearized framework the
fundamental matrix can be calculated from the relative pose and the camera matrices according to the formula presented under essential_to_fundamental_matrix:
$$\mathit{FMatrix} = (\mathit{CamMat2}^{-1})^{T} \cdot E \cdot \mathit{CamMat1}^{-1} \,.$$
The transformation from a relative pose to a fundamental matrix goes along with the propagation of the covariance
matrices CovRelPose to CovFMat. If CovRelPose is empty CovFMat will be empty too.
The conversion operator rel_pose_to_fundamental_matrix is used especially for a subsequent visual-
ization of the epipolar line structure via the fundamental matrix, which depicts the underlying stereo geometry.
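A hedged HDevelop sketch; the argument order shown (RelPose, CovRelPose, CamMat1, CamMat2, then the outputs FMatrix, CovFMat) is an assumption based on the parameters named in the text and should be verified against the parameter list.
* RelPose and CovRelPose, e.g., from vector_to_rel_pose; CamMat1/CamMat2 are the camera matrices.
rel_pose_to_fundamental_matrix (RelPose, CovRelPose, CamMat1, CamMat2, FMatrix, CovFMat)
* If CovRelPose is passed as an empty tuple ([]), CovFMat is returned empty as well.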
Parameters
Compute the essential matrix given image point correspondences and known camera matrices and reconstruct 3D
points.
For a stereo configuration with known camera matrices the geometric relation between the two images is de-
fined by the essential matrix. The operator vector_to_essential_matrix determines the essential matrix
EMatrix from in general at least six given point correspondences, that fulfill the epipolar constraint:
$$\begin{pmatrix} X_2 \\ Y_2 \\ 1 \end{pmatrix}^{T} \cdot \mathit{EMatrix} \cdot \begin{pmatrix} X_1 \\ Y_1 \\ 1 \end{pmatrix} = 0 \,.$$
The operator vector_to_essential_matrix is designed to deal only with a linear camera model. This is
in contrast to the operator vector_to_rel_pose, that encompasses lens distortions too. The internal camera
parameters are passed by the arguments CamMat1 and CamMat2, which are 3 × 3 upper triangular matrices
describing an affine transformation. The relation between the vector (X,Y,1), defining the direction from the
camera to the viewed 3D point, and its (projective) 2D image coordinates (col,row,1) is:
$$\begin{pmatrix} \mathit{col} \\ \mathit{row} \\ 1 \end{pmatrix} = \mathit{CamMat} \cdot \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix} \quad \text{where} \quad \mathit{CamMat} = \begin{pmatrix} f/s_x & s & c_x \\ 0 & f/s_y & c_y \\ 0 & 0 & 1 \end{pmatrix} .$$
The focal length is denoted by $f$, $s_x$ and $s_y$ are scaling factors, $s$ describes a skew factor, and $(c_x, c_y)$ indicates the principal point. Mainly, these are the elements known from the camera parameters as used for example in
calibrate_cameras. Alternatively, the elements of the camera matrix can be described in a different way,
see e.g. stationary_camera_self_calibration.
The point correspondences (Rows1,Cols1) and (Rows2,Cols2) are typically found by applying the operator
match_essential_matrix_ransac. Multiplying the image coordinates by the inverse of the camera ma-
trices results in the 3D direction vectors, which can then be inserted in the epipolar constraint.
The parameter Method decides whether the relative orientation between the cameras is of a special type and which
algorithm is to be applied for its computation. If Method is either ’normalized_dlt’ or ’gold_standard’ the relative
orientation is arbitrary. Choosing ’trans_normalized_dlt’ or ’trans_gold_standard’ means that the relative motion
between the cameras is a pure translation. The typical application for this special motion case is the scenario
of a single fixed camera looking onto a moving conveyor belt. In this case the minimum required number of
corresponding points is just two instead of six in the general case.
The essential matrix is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen.
With ’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result. Here, ’normal-
ized_dlt’ and ’gold_standard’ stand for direct-linear-transformation and gold-standard-algorithm respectively. All
methods return the coordinates (X,Y,Z) of the reconstructed 3D points. The optimal methods also return the co-
variances of the 3D points in CovXYZ. Let n be the number of points then the 3 × 3 covariance matrices are
concatenated and stored in a tuple of length 9n. Additionally, the optimal methods return the covariance of the
essential matrix CovEMat.
If an optimal gold-standard-algorithm is chosen the covariances of the image points (CovRR1, CovRC1, CovCC1,
CovRR2, CovRC2, CovCC2) can be incorporated in the computation. They can be provided for example by the
operator points_foerstner. If the point covariances are unknown, which is the default, empty tuples are
input. In this case the optimization algorithm internally assumes uniform and equal covariances for all points.
The value Error indicates the overall quality of the optimization process and is the root-mean-square Euclidean
distance in pixels between the points and their corresponding epipolar lines.
For the operator vector_to_essential_matrix a special configuration of scene points and cameras exists:
if all 3D points lie in a single plane and additionally are all closer to one of the two cameras then the solution
in the essential matrix is not unique but twofold. As a consequence both solutions are computed and returned by
the operator. This means that all output parameters are of double length and the values of the second solution are
simply concatenated behind the values of the first one.
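A hedged HDevelop sketch; the argument order (point coordinates, point covariances, camera matrices, Method, then the outputs) is an assumption based on the parameters named in the text and should be verified against the parameter list. Empty covariance tuples select the default of uniform, equal point covariances.
* Correspondences, e.g., selected from the output of match_essential_matrix_ransac.
vector_to_essential_matrix (R1, C1, R2, C2, [], [], [], [], [], [], CamMat1, CamMat2, 'gold_standard', EMatrix, CovEMat, Error, X, Y, Z, CovXYZ)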
Parameters
Compute the fundamental matrix given a set of image point correspondences and reconstruct 3D points.
For a stereo configuration with unknown camera parameters the geometric relation between the two images is
defined by the fundamental matrix. The operator vector_to_fundamental_matrix determines the fun-
damental matrix FMatrix from given point correspondences (Rows1,Cols1), (Rows2,Cols2), that fulfill the
epipolar constraint:
$$\begin{pmatrix} \mathit{Cols2} \\ \mathit{Rows2} \\ 1 \end{pmatrix}^{T} \cdot \mathit{FMatrix} \cdot \begin{pmatrix} \mathit{Cols1} \\ \mathit{Rows1} \\ 1 \end{pmatrix} = 0 \,.$$
Note the column/row ordering in the point coordinates: since the fundamental matrix encodes the projective re-
lation between two stereo images embedded in 3D space, the x/y notation must be compliant with the camera
coordinate system. Therefore, (x,y) coordinates correspond to (column,row) pairs.
For a general relative orientation of the two cameras the minimum number of required point correspondences is
eight. Then, Method is chosen to be ’normalized_dlt’ or ’gold_standard’. If left and right camera are identical and
the relative orientation between them is a pure translation then choose Method equal to ’trans_normalized_dlt’
or ’trans_gold_standard’. In this special case the minimum number of correspondences is only two. The typical
application of the motion being a pure translation is that of a single fixed camera looking onto a moving conveyor
belt.
The fundamental matrix is determined by minimizing a cost function. To minimize the respective error different
algorithms are available, and the user can choose between the direct-linear-transformation (’normalized_dlt’) and
the gold-standard-algorithm (’gold_standard’). Like the motion case, the algorithm can be selected with the pa-
rameter Method. For Method = ’normalized_dlt’ or ’trans_normalized_dlt’, a linear algorithm minimizes an
algebraic error based on the above epipolar constraint. This algorithm offers a good compromise between speed
and accuracy. For Method = ’gold_standard’ or ’trans_gold_standard’, a mathematically optimal, but slower op-
timization is used, which minimizes the geometric backprojection error of reconstructed projective 3D points. In
this case, in addition to the fundamental matrix its covariance matrix CovFMat is output, along with the projective
coordinates (X,Y,Z,W) of the reconstructed points and their covariances CovXYZW. Let n be the number of points.
Then the concatenated 4 × 4 covariance matrices are stored in a tuple of length 16n.
If an optimal gold-standard-algorithm is chosen the covariances of the image points (CovRR1, CovRC1, CovCC1,
CovRR2, CovRC2, CovCC2) can be incorporated in the computation. They can be provided for example by the
operator points_foerstner. If the point covariances are unknown, which is the default, empty tuples are
input. In this case the optimization algorithm internally assumes uniform and equal covariances for all points.
The value Error indicates the overall quality of the optimization procedure and is the mean Euclidean distance
in pixels between the points and their corresponding epipolar lines.
If the correspondence between the points are not known, match_fundamental_matrix_ransac should be
used instead.
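As a hedged illustration of the two modes discussed above, the following HDevelop sketch shows the fast linear method and the statistically optimal one; the parameter order follows the description and should be checked against the full parameter list.
* Fast linear estimate (algebraic error); point covariances omitted.
vector_to_fundamental_matrix (Rows1, Cols1, Rows2, Cols2, [], [], [], [], [], [], \
                              'normalized_dlt', FMatrix, CovFMat, Error, X, Y, Z, W, CovXYZW)
* Statistically optimal estimate (geometric backprojection error); only the
* gold-standard methods provide the covariance outputs CovFMat and CovXYZW (see above).
vector_to_fundamental_matrix (Rows1, Cols1, Rows2, Cols2, [], [], [], [], [], [], \
                              'gold_standard', FMatrix, CovFMat, Error, X, Y, Z, W, CovXYZW)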
Parameters
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Input points in image 1 (row coordinate).
Restriction: length(Rows1) >= 8 || length(Rows1) >= 2
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Input points in image 1 (column coordinate).
Restriction: length(Cols1) == length(Rows1)
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Input points in image 2 (row coordinate).
Restriction: length(Rows2) == length(Rows1)
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Input points in image 2 (column coordinate).
Restriction: length(Cols2) == length(Rows1)
. CovRR1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinate variance of the points in image 1.
Default: []
. CovRC1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Covariance of the points in image 1.
Default: []
. CovCC1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinate variance of the points in image 1.
Default: []
. CovRR2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinate variance of the points in image 2.
Default: []
. CovRC2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Covariance of the points in image 2.
Default: []
Compute the fundamental matrix and the radial distortion coefficient given a set of image point correspondences
and reconstruct 3D points.
For a stereo configuration with unknown camera parameters, the geometric relation between the two images is de-
fined by the fundamental matrix. vector_to_fundamental_matrix_distortion determines the fun-
damental matrix FMatrix and the radial distortion coefficient Kappa (κ) from given point correspondences
(Rows1,Cols1), (Rows2,Cols2) that fulfill the epipolar constraint:
\begin{pmatrix} c_{2} \\ r_{2} \\ 1 \end{pmatrix}^{T} \cdot \mathrm{FMatrix} \cdot \begin{pmatrix} c_{1} \\ r_{1} \\ 1 \end{pmatrix} = 0 \, .
Here, (r1 , c1 ) and (r2 , c2 ) denote image points that are obtained by undistorting the input image points with the
division model (see Calibration):
r = \frac{\tilde{r}}{1 + \kappa(\tilde{r}^{2} + \tilde{c}^{2})} \qquad c = \frac{\tilde{c}}{1 + \kappa(\tilde{r}^{2} + \tilde{c}^{2})}
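As a small numeric illustration of the division model above (a sketch only; RowD and ColD are assumed to be given in the coordinate convention of the formula, typically relative to the distortion center):
* Undistort a single point with the division model, Kappa as returned by the operator.
RSq := RowD * RowD + ColD * ColD
RowU := RowD / (1 + Kappa * RSq)
ColU := ColD / (1 + Kappa * RSq)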
Note the column/row ordering in the point coordinates above: since the fundamental matrix encodes the projective
relation between two stereo images embedded in 3D space, the x/y notation must be compliant with the camera
coordinate system. Therefore, (x,y) coordinates correspond to (column,row) pairs.
For a general relative orientation of the two cameras, the minimum number of required point correspondences
is nine. Then, Method must be set to ’linear’ or ’gold_standard’. If the left and right cameras are identi-
cal and the relative orientation between them is a pure translation, Method must be set to ’trans_linear’ or
’trans_gold_standard’. In this special case, the minimum number of correspondences is only four. The typical
application of the motion being a pure translation is a single fixed camera looking onto a moving conveyor belt.
The fundamental matrix is determined by minimizing a cost function. To minimize the respective error, different
algorithms are available, and the user can choose between the linear (’linear’) and the gold-standard algorithm
(’gold_standard’). Like the motion type, the algorithm can be selected with the parameter Method. For Method
= ’linear’ or ’trans_linear’, a linear algorithm that minimizes an algebraic error based on the above epipolar
constraint is used. This algorithm is very fast. For the pure translation case (Method = ’trans_linear’), the
linear method returns accurate results for small to moderate noise of the point coordinates and for most distortions
(except for very small distortions). For a general relative orientation of the two cameras (Method = ’linear’),
the linear method only returns accurate results for very small noise of the point coordinates and for sufficiently
large distortions. For Method = ’gold_standard’ or ’trans_gold_standard’, a mathematically optimal but slower
optimization is used, which minimizes the geometric reprojection error of reconstructed projective 3D points. In
this case, in addition to the fundamental matrix and the distortion coefficient, the projective coordinates (X,Y,Z,W)
of the reconstructed points are returned. For a general relative orientation of the two cameras, in general Method
= ’gold_standard’ should be selected.
If an optimal gold-standard algorithm is chosen, the covariances of the image points (CovRR1, CovRC1, CovCC1,
CovRR2, CovRC2, CovCC2) can be incorporated into the computation. They can be provided, for example, by
the operator points_foerstner. If the point covariances are unknown, which is the default, empty tuples are
passed. In this case, the optimization algorithm internally assumes uniform and equal covariances for all points.
The value Error indicates the overall quality of the optimization procedure and is the mean symmetric Euclidean
distance in pixels between the points and their corresponding epipolar lines.
Compute the relative orientation between two cameras given image point correspondences and known camera
parameters and reconstruct 3D space points.
For a stereo configuration with known camera parameters the geometric relation between the two images is defined
by the relative pose. The operator vector_to_rel_pose computes the relative pose from in general at least
six point correspondences in the image pair. RelPose indicates the relative pose of camera 1 with respect to
camera 2 (see create_pose for more information about poses and their representations). This is in accordance
with the explicit calibration of a stereo setup using the operator calibrate_cameras. Now, let R, t be the
rotation and translation of the relative pose. Then, the essential matrix E is defined as E = ([t]× R)T , where [t]×
denotes the 3 × 3 skew-symmetric matrix realizing the cross product with the vector t. The pose can be determined
from the epipolar constraint:
\begin{pmatrix} X_{2} \\ Y_{2} \\ 1 \end{pmatrix}^{T} \cdot ([t]_{\times} R)^{T} \cdot \begin{pmatrix} X_{1} \\ Y_{1} \\ 1 \end{pmatrix} = 0 \quad \text{where} \quad [t]_{\times} = \begin{pmatrix} 0 & -t_{z} & t_{y} \\ t_{z} & 0 & -t_{x} \\ -t_{y} & t_{x} & 0 \end{pmatrix} \, .
Note that the essential matrix is a projective entity and thus is defined only up to a scaling factor. It follows that
the translation vector of the relative pose can only be determined up to scale too. In fact, the computed translation
vector will always be normalized to unit length. As a consequence, a three-dimensional reconstruction of the
scene, here in terms of points given by their coordinates (X,Y,Z), can be carried out only up to a single global
scaling factor. If the absolute 3D coordinates of the reconstruction are to be achieved the unknown scaling factor
can be computed from a gauge, which has to be visible in both images. For example, a simple gauge can be given
by any known distance between points in the scene.
The operator vector_to_rel_pose is designed to deal with a camera model that includes lens distortions.
This is in contrast to the operator vector_to_essential_matrix, which encompasses only straight line
preserving cameras. The camera parameters are passed by the arguments CamPar1, CamPar2. The 3D
direction vectors (X1 , Y1 , 1) and (X2 , Y2 , 1) are calculated from the point coordinates (Rows1,Cols1) and
(Rows2,Cols2) by inverting the process of projection (see Calibration). The point correspondences are typi-
cally determined by applying the operator match_rel_pose_ransac.
The parameter Method decides whether the relative orientation between the cameras is of a special type and which
algorithm is to be applied for its computation. If Method is either ’normalized_dlt’ or ’gold_standard’ the relative
orientation is arbitrary. Choosing ’trans_normalized_dlt’ or ’trans_gold_standard’ means that the relative motion
between the cameras is a pure translation. The typical application for this special motion case is the scenario
of a single fixed camera looking onto a moving conveyor belt. In this case the minimum required number of
corresponding points is just two instead of six in the general case.
The relative pose is computed by a linear algorithm if ’normalized_dlt’ or ’trans_normalized_dlt’ is chosen. With
’gold_standard’ or ’trans_gold_standard’ the algorithm gives a statistically optimal result. Here, ’normalized_dlt’
and ’gold_standard’ stand for direct-linear-transformation and gold-standard-algorithm respectively. All methods
return the coordinates (X,Y,Z) of the reconstructed 3D points. The optimal methods also return the covariances of
the 3D points in CovXYZ. Let n be the number of points; the 3 × 3 covariance matrices are then concatenated and
stored in a tuple of length 9n. Additionally, the optimal methods return the 6 × 6 covariance matrix of the pose
CovRelPose.
If an optimal gold-standard-algorithm is chosen the covariances of the image points (CovRR1, CovRC1, CovCC1,
CovRR2, CovRC2, CovCC2) can be incorporated in the computation. They can be provided for example by the
operator points_foerstner. If the point covariances are unknown, which is the default, empty tuples are
input. In this case the optimization algorithm internally assumes uniform and equal covariances for all points.
The value Error indicates the overall quality of the optimization process and is the root-mean-square Euclidean
distance in pixels between the points and their corresponding epipolar lines.
For the operator vector_to_rel_pose a special configuration of scene points and cameras exists: if all 3D
points lie in a single plane and additionally are all closer to one of the two cameras then the solution in the relative
pose is not unique but twofold. As a consequence both solutions are computed and returned by the operator. This
means that all output parameters are of double length and the values of the second solution are simply concatenated
behind the values of the first one.
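A hedged HDevelop sketch of a typical call, with point correspondences assumed to come from match_rel_pose_ransac and CamPar1/CamPar2 holding the interior camera parameters; the parameter order follows the description above and should be checked against the full parameter list.
* Gold-standard estimation; empty covariance tuples imply uniform, equal point
* covariances. The translation of RelPose is normalized to unit length, so
* X, Y, Z are reconstructed only up to a global scale factor (see above).
vector_to_rel_pose (Rows1, Cols1, Rows2, Cols2, [], [], [], [], [], [], \
                    CamPar1, CamPar2, 'gold_standard', \
                    RelPose, CovRelPose, Error, X, Y, Z, CovXYZ)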
Parameters
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Input points in image 1 (row coordinate).
Restriction: length(Rows1) >= 6 || length(Rows1) >= 2
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Input points in image 1 (column coordinate).
Restriction: length(Cols1) == length(Rows1)
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Input points in image 2 (row coordinate).
Restriction: length(Rows2) == length(Rows1)
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Input points in image 2 (column coordinate).
Restriction: length(Cols2) == length(Rows1)
. CovRR1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinate variance of the points in image 1.
Default: []
. CovRC1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Covariance of the points in image 1.
Default: []
. CovCC1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Column coordinate variance of the points in image 1.
Default: []
. CovRR2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Row coordinate variance of the points in image 2.
Default: []
. CovRC2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Covariance of the points in image 2.
Default: []
The operator depth_from_focus extracts the depth using a focus sequence. The images of the focus sequence
have to be passed as a multi channel image (MultiFocusImage). The depth for each pixel will be returned in
Depth as the channel number. The parameter Confidence returns a confidence value for each depth estimation:
The larger this value, the higher the confidence of the depth estimation is.
depth_from_focus selects the pixels with the best focus of all focus levels. The method used to extract these
pixels is specified by the parameters Filter and Selection.
For the parameter Filter, you can choose between the values ’highpass’ and ’bandpass’. To determine the focus
within the image a high- or a bandpass filter can be applied. The larger the filter response, the more in focus is
the image at this location. Compared to the highpass filter, the bandpass filter suppresses high frequencies. This is
useful in particular in images containing strong noise.
Optionally, you can smooth the filtered image using the mean filter by passing two additional integer values for
the mask size in the parameter Filter (e.g., [’highpass’, 7, 7]). This blurs the in-focus region with neighboring
pixels and thus makes it possible to bridge small areas with no texture within the image. Note, however, that this smoothing
does not suppress noise in the original image, since it is applied only after high- or bandpass filtering.
The parameter Selection determines how the optimum focus level is selected. If you pass the value
’next_maximum’, the closest focus maximum in the neighborhood is used. In contrast, if you pass the value ’local’,
the focus level is determined based on the focus values of all focus levels of the pixel. With ’next_maximum’, you
typically achieve a slightly smoothed and more robust result.
This additional smoothing is useful if no telecentric lenses are used to take the input images. In this case, the
position of an object will slightly shift within the sequence. By adding appropriate smoothing, this effect can be
partially compensated.
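For example, the smoothed focus measure described above could be requested as follows (values taken from the description; a sketch only):
* High-pass focus measure, smoothed with a 7x7 mean filter, closest focus maximum.
depth_from_focus (MultiFocusImage, Depth, Confidence, ['highpass',7,7], 'next_maximum')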
Attention
If MultiFocusImage contains more than 255 channels (focus levels), Depth is clipped at 255, i.e. depth
values higher than 255 are ignored.
If the filter mask for Filter is specified with even values, the routine uses the next larger odd values instead (this
way the center of the filter mask is always explicitly determined).
If Selection is set to ’local’ and Filter is set to ’highpass’ or ’bandpass’, depth_from_focus can be
executed on OpenCL devices. If smoothing is enabled, the same restrictions and limitations as for mean_image
apply.
Note that filter operators may return unexpected results if an image with a reduced domain is used as input. Please
refer to the chapter Filters.
Parameters
. MultiFocusImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . multichannel-image(-array) ; object : byte
Multichannel gray image consisting of multiple focus levels.
. Depth (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte
Depth image.
. Confidence (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte
Confidence of depth estimation.
. Filter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string / integer
Filter used to find sharp pixels.
Default: ’highpass’
Suggested values: Filter ∈ {’highpass’, ’bandpass’, 3, 5, 7, 9}
. Selection (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Method used to find sharp pixels.
Default: ’next_maximum’
List of values: Selection ∈ {’next_maximum’, ’local’}
Example
compose3 (Focus0, Focus1, Focus2, MultiFocus)
depth_from_focus (MultiFocus, Depth, Confidence, 'highpass', 'next_maximum')
mean_image (Depth, Smooth, 15, 15)
select_grayvalues_from_channels (MultiFocus, Smooth, SharpImage)
threshold (Confidence, HighConfidence, 10, 255)
reduce_domain (SharpImage, HighConfidence, ConfidentSharp)
Execution Information
select_grayvalues_from_channels ( MultichannelImage,
IndexImage : Selected : : )
compose3 (Focus0, Focus1, Focus2, MultiFocus)
depth_from_focus (MultiFocus, Depth, Confidence, 'highpass', 'next_maximum')
mean_image (Depth, Smooth, 15, 15)
select_grayvalues_from_channels (MultiFocus, Smooth, SharpImage)
Execution Information
Possible Predecessors
depth_from_focus, mean_image
Possible Successors
disp_image
See also
count_channels
Module
Foundation
• create_stereo_model.
For the reconstruction of surfaces, the methods ’surface_pairwise’ and ’surface_fusion’ are avail-
able. For detailed information on these two methods, have a look at the reference manual entry of
reconstruct_surface_stereo.
Set the image pairs (only for surface reconstruction): For the reconstruction of 3D surfaces, multiple binocular
stereo reconstructions are performed, and then combined. For the binocular reconstruction, image pairs
have to be specified. For example, for the three images shown above, the image pairs might be [0,1] and
[1,2]. The image pairs have to be specified using
• set_stereo_model_image_pairs,
• get_stereo_model_image_pairs.
With
• set_stereo_model_param,
you can optimize the settings of the 3D reconstruction for your setup.
When reconstructing surfaces, it is highly recommended to limit the 3D reconstruction using a bounding box
which is as tight as possible around the object that is to be reconstructed.
The bounding box, which is set with set_stereo_model_param, restricts the area where the object is
reconstructed, and thus can be used to reduce the runtime greatly.
• get_stereo_model_param.
• reconstruct_points_stereo or
• reconstruct_surface_stereo.
Get intermediate results (only for surface reconstruction): Note that to query these intermediate results, you
must enable the ’persistence’ mode for the stereo model with set_stereo_model_param before per-
forming the reconstruction.
With
• get_stereo_model_object,
you can access and inspect intermediate results of a surface reconstruction performed with
reconstruct_surface_stereo. These images can be used for troubleshooting the reconstruction
process.
With
• get_stereo_model_object_model_3d,
you can get the 3D object model that was reconstructed with reconstruct_surface_stereo as an
intermediate result using the Method ’surface_fusion’.
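Putting these steps together, a minimal surface-reconstruction workflow might look as follows (a sketch: StereoModelID is assumed to have been created with create_stereo_model from a calibrated camera setup, and the bounding box values are purely illustrative):
* Define the binocular pairs, e.g., [0,1] and [1,2] as in the example above.
set_stereo_model_image_pairs (StereoModelID, [0,1], [1,2])
* Tight bounding box around the object (assumed format: two opposite corners, in [m]).
set_stereo_model_param (StereoModelID, 'bounding_box', [-0.1,-0.1,0.2,0.1,0.1,0.4])
* Reconstruct the surface and release the model when it is no longer needed.
reconstruct_surface_stereo (Images, StereoModelID, ObjectModel3D)
clear_stereo_model (StereoModelID)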
clear_stereo_model ( : : StereoModelID : )
You select the image pair of interest by specifying the corresponding camera indices [From, To] in
PairIndex. By setting one of the following values in ObjectName, the corresponding iconic objects are
then returned in Object:
’from_image_rect’, ’to_image_rect’: Rectified image corresponding to the from and to camera, respectively. Both
images can be used to inspect the quality of the internal binocular stereo image rectification.
’disparity_image’: Disparity image for this pair. The quality of the disparity image has a direct impact on the final
surface reconstruction.
’score_image’: Score image assigned to the disparity image for this pair.
A mismatch between the rectified images, i.e., features appearing in different rows in the two im-
ages, or errors in the disparity or the score image have direct impact on the quality of the fi-
nal surface reconstruction. Therefore, we recommend to correct any detected imperfections by adjust-
ing the stereo model parameters (see set_stereo_model_param), in particular those which con-
trol the internal usage of gen_binocular_rectification_map and binocular_disparity (see
set_stereo_model_image_pairs and reconstruct_surface_stereo for further details).
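A hedged sketch of such an inspection (’persistence’ must be enabled before the reconstruction; the parameter value used here is an assumption, see set_stereo_model_param):
* Enable persistence so that intermediate iconic results are kept.
set_stereo_model_param (StereoModelID, 'persistence', 1)
reconstruct_surface_stereo (Images, StereoModelID, ObjectModel3D)
* Inspect the rectified images, the disparity image, and the score image of pair [0,1].
get_stereo_model_object (ImageRectFrom, StereoModelID, [0,1], 'from_image_rect')
get_stereo_model_object (ImageRectTo, StereoModelID, [0,1], 'to_image_rect')
get_stereo_model_object (Disparity, StereoModelID, [0,1], 'disparity_image')
get_stereo_model_object (Score, StereoModelID, [0,1], 'score_image')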
Parameters
. Object (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object(-array) ; object
Iconic result.
. StereoModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . stereo_model ; handle
Handle of the stereo model.
. PairIndex (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; integer / string / real
Camera indices of the pair ([From, To]).
Suggested values: PairIndex ∈ {0, 1, 2}
. ObjectName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the iconic result to be returned.
Suggested values: ObjectName ∈ {’from_image_rect’, ’to_image_rect’, ’disparity_image’, ’score_image’}
Execution Information
get_stereo_model_object_model_3d ( : : StereoModelID,
GenParamName : ObjectModel3D )
Parameters
. StereoModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . stereo_model ; handle
Handle of the stereo model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Names of the model parameters.
List of values: GenParamName ∈ {’m3d_pairwise’}
. ObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Values of the model parameters.
Execution Information
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
reconstruct_surface_stereo, set_stereo_model_param
See also
set_stereo_model_param
Module
3D Metrology
get_stereo_model_param ( : : StereoModelID,
GenParamName : GenParamValue )
’type’: Type of the stereo model (currently either ’surface_pairwise’, ’surface_fusion’ or ’points_3d’).
’camera_setup_model’: Handle to a copy of the camera setup model set in the stereo model. Changing properties
of the copy does not affect the camera setup model stored in the stereo model.
’from_cam_param_rect N’, ’to_cam_param_rect N’: Camera parameters of the rectified from- and to-cameras of
camera pair N. See set_stereo_model_image_pairs for more information about camera pairs.
’from_cam_pose_rect N’, ’to_cam_pose_rect N’: Point transformation from the rectified from- and to-cameras of
camera pair N to the respective unrectified camera. See set_stereo_model_image_pairs for more
information about camera pairs.
’rel_pose_rect N’: Point transformation from the rectified to-camera to the rectified from-camera. See
set_stereo_model_image_pairs for more information about camera pairs.
The parameters ’type’ and ’camera_setup_model’ are set when creating the stereo model with
create_stereo_model. For ’from_cam_param_rect N’, ’to_cam_param_rect N’, ’from_cam_pose_rect N’,
’to_cam_pose_rect N’, and ’rel_pose_rect N’, note that these parameters are only available after setting the image
pairs (see set_stereo_model_image_pairs).
A note on tuple-valued model parameters
Most of the stereo model parameters are single-valued. Thus, you can provide a list (i.e., tuple) of parameter names
and get a list (tuple) of values of the same length. In contrast, when querying a tuple-valued
parameter, a tuple of values is returned. When querying such a parameter together with other parameters, the value-
to-parameter-name correspondence is not obvious anymore. Thus, tuple-valued parameters like ’bounding_box’,
’min_disparity’ or ’max_disparity’ should always be queried in a separate call to get_stereo_model_param.
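For example (a sketch following the note above):
* Single-valued parameters can be queried together ...
get_stereo_model_param (StereoModelID, ['type','persistence'], GenParamValue)
* ... whereas a tuple-valued parameter such as 'bounding_box' gets its own call.
get_stereo_model_param (StereoModelID, 'bounding_box', BoundingBox)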
Parameters
. StereoModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . stereo_model ; handle
Handle of the stereo model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Names of the parameters to be set.
List of values: GenParamName ∈ {’type’, ’camera_setup_model’, ’bounding_box’, ’persistence’,
’sub_sampling_step’, ’rectif_interpolation’, ’rectif_sub_sampling’, ’rectif_method’, ’disparity_method’,
’binocular_method’, ’binocular_num_levels’, ’binocular_mask_width’, ’binocular_mask_height’,
’binocular_texture_thresh’, ’binocular_score_thresh’, ’binocular_filter’, ’binocular_sub_disparity’,
’binocular_mg_gray_constancy’, ’binocular_mg_gradient_constancy’, ’binocular_mg_smoothness’,
’binocular_mg_initial_guess’, ’binocular_mg_solver’, ’binocular_mg_cycle_type’,
’binocular_mg_pre_relax’, ’binocular_mg_post_relax’, ’binocular_mg_initial_level’,
’binocular_mg_iterations’, ’binocular_mg_pyramid_factor’, ’binocular_ms_surface_smoothing’,
’binocular_ms_edge_smoothing’, ’binocular_ms_consistency_check’, ’binocular_ms_similarity_measure’,
’binocular_ms_sub_disparity’, ’min_disparity’, ’max_disparity’, ’point_meshing’, ’poisson_depth’,
’poisson_solver_divide’, ’poisson_samples_per_node’, ’resolution’, ’surface_tolerance’, ’min_thickness’,
’smoothing’, ’color’, ’color_invisible’, ’from_cam_param_rect’, ’to_cam_param_rect’,
’from_cam_pose_rect’, ’to_cam_pose_rect’, ’rel_pose_rect’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value-array ; real / integer / string
Values of the parameters to be set.
Execution Information
The reconstructed 3D point coordinates are returned in the tuples X, Y, and Z, relative to the coordinate system
of the camera setup model (see create_camera_setup_model). The tuple PointIdxOut contains the
corresponding point indices.
The reconstruction algorithm works as follows: First, it identifies point correspondences for a given 3D point
by collecting all sets with the same PointIdx. Then, it uses the Row, Column, and CameraIdx informa-
tion from the collected sets to project lines of sight from each camera through the corresponding image point
[Row,Column]. If there are at least 2 lines of sight for the point PointIdx, they are intersected and the result is
stored as the set (X[J],Y[J],Z[J],PointIdxOut[J]). The intersection is performed with a least-squares
algorithm, without taking into account potentially invalid lines of sight (e.g., if an image point was falsely specified
as corresponding to a certain 3D point).
To compute the covariance matrices for the reconstructed 3D points, statistical information about the extracted
image coordinates, i.e., the covariance matrices of the image points (see, e.g., points_foerstner), is needed
as input and must be passed in the parameter CovIP. Otherwise, if no covariance matrices for the 3D points are
needed or no covariance matrices for the image points are available, an empty tuple can be passed in CovIP. Then
no covariance matrix for the reconstructed 3D points is computed.
The covariance matrix of an image point is:
\mathrm{CovIP} = \begin{pmatrix} \sigma_{r}^{2} & \sigma_{rc} \\ \sigma_{rc} & \sigma_{c}^{2} \end{pmatrix}
The covariance matrices are symmetric 2x2 matrices, whose entries in the main diagonal represent the variances
of the image point in row-direction and column-direction, respectively. For each image point, a covariance matrix
must be passed in CovIP in the form of a tuple with 4 elements in row-major order: [\sigma_{r}^{2}, \sigma_{rc}, \sigma_{rc}, \sigma_{c}^{2}].
Thus, |CovIP|=4*|Row| and CovIP[I*4:I*4+3] is the covariance matrix of the I-th image point.
The computed covariance matrix for a successfully reconstructed 3D point is represented by a symmetric 3x3
matrix:
\mathrm{CovWP} = \begin{pmatrix} \sigma_{x}^{2} & \sigma_{xy} & \sigma_{xz} \\ \sigma_{yx} & \sigma_{y}^{2} & \sigma_{yz} \\ \sigma_{zx} & \sigma_{zy} & \sigma_{z}^{2} \end{pmatrix}
The diagonal entries represent the variances of the reconstructed 3D point in x-, y-, and z-direction. The computed
matrices are returned in the parameter CovWP in the form of tuples with 9 elements each, in row-major order.
Thus, |CovWP|=9*|X| and CovWP[J*9:J*9+8] is the covariance matrix of the J-th 3D point. Note that
if the camera setup associated with the stereo model contains the covariance matrices for the camera parameters,
these covariance matrices are considered in the computation of CovWP too.
If the stereo model has a valid bounding box set (see set_stereo_model_param), the resulting points are
clipped to this bounding box, i.e., points outside it are not returned. If the bounding box associated with the stereo
model is invalid, it is ignored and all points that could be reconstructed are returned.
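The following hedged sketch assembles CovIP in the row-major layout described above from per-point variance tuples CovRR, CovRC, CovCC (names assumed, e.g., taken from points_foerstner) and reconstructs the points; the order of the trailing input parameters and of the outputs follows the description and should be checked against the full parameter list.
* Interleave the per-point variances/covariance into 4 elements per image point.
CovIP := []
for I := 0 to |Row| - 1 by 1
    CovIP := [CovIP, CovRR[I], CovRC[I], CovRC[I], CovCC[I]]
endfor
reconstruct_points_stereo (StereoModelID, Row, Column, CovIP, CameraIdx, PointIdx, \
                           X, Y, Z, CovWP, PointIdxOut)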
Parameters
. StereoModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . stereo_model ; handle
Handle of the stereo model.
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Row coordinates of the detected points.
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Column coordinates of the detected points.
. CovIP (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Covariance matrices of the detected points.
Default: []
reconstruct_surface_stereo (
Images : : StereoModelID : ObjectModel3D )
10. Readjust the parameters of the stereo model to improve the results with respect to quality and runtime with
set_stereo_model_param.
A camera setup model is associated with the stereo model StereoModelID upon its creation with
create_stereo_model. The camera setup must contain calibrated information about the cameras, with which
the images in the image array Images were acquired: the I-th image from the array corresponds to the camera
with index I-1 from the camera setup; the number of images in the array must be the same as the number of
cameras in the camera setup. The Images must represent a static scene or they must be taken simultaneously,
otherwise, the reconstruction of the surface might be impossible.
A well-calibrated camera setup is the main requirement for a precise surface reconstruction. Therefore, special
attention should be paid to obtaining a precise calibration of the cameras in the multi-view stereo setup used.
HALCON provides calibration of a multi-view setup with the operator calibrate_cameras. The resulting
calibrated camera setup can be accessed with a successive call to get_calib_data. Alternatively, for camera
setups with known parameters a calibrated camera setup can be created with create_camera_setup_model.
The proper selection of image pairs (see set_stereo_model_image_pairs) has an important role for the
general quality of the surface reconstruction. On the one hand, camera pairs with a small base line (small distance
between the camera centers) are better suited for the binocular stereo disparity algorithms. On the other hand,
in order to derive more accurate depth information of the scene, pairs with a long base line should be preferred.
Camera pairs should provide different points of view, such that if one pair does not see a certain area of the
surface, it is covered by another pair. Please note that the number of pairs linearly affects the runtime of the
pairwise reconstruction. Therefore, use "as many as needed and just as few as possible" image pairs in order to
handle the trade-off between completeness of the surface reconstruction and reconstruction runtime.
A bounding box is associated with the stereo model StereoModelID. For the surface stereo reconstruction,
it is required that the bounding box is valid (see set_stereo_model_param for further details). The recon-
struction algorithm needs the bounding box for three reasons:
• First, if MinDisparity and MaxDisparity were not set manually using the operators
create_stereo_model or set_stereo_model_param, it uses the projection of the bound-
ing box into both images of each image pair in order to estimate the values for MinDisparity
and MaxDisparity, which in turn are used in the internal call to binocular_disparity and
binocular_disparity_ms. In the case of using binocular_disparity_mg as disparity method,
suitable values for the parameters InitialGuess and ’initial_level’ are derived from the above-mentioned
parameters. However, the automatic estimation for this method is only used if called with default values for
the two parameters. Otherwise, the values as set by the user with set_stereo_model_param are used.
• Secondly, the default parameters for the fusion of pairwise reconstructions are calculated based on the bound-
ing box. They are reset in case the bounding box is changed. The bounding box should be tight around the
volume of interest. Otherwise, the runtime will increase unnecessarily and drastically.
• Thirdly, the surface fragments lying outside the bounding box are clipped and are not re-
turned in ObjectModel3D. A too large bounding box results in a large difference be-
tween MinDisparity and MaxDisparity and this usually slows down the execution of
binocular_disparity, binocular_disparity_ms or binocular_disparity_mg and
therefore reconstruct_surface_stereo. A too small bounding box might result in clipping valid
surface areas.
Note that the method ’surface_fusion’ will try to produce a closed surface. If the object is only observed and
reconstructed from one side, the far end of the bounding box usually determines where the object is cut off.
Setting parameters of pairwise reconstruction before setting parameters of fusion is essential since the pair-
wise reconstruction of the object is input for the fusion algorithm. For a description of parameters, see
set_stereo_model_param. The choice of ’disparity_method’ has a major influence. The objects in the
scene should expose certain surface properties in order to make the scene suitable for the dense surface reconstruc-
tion. First, the surface reflectance should exhibit Lambertian properties as closely as possible (i.e., light falling on
the surface is scattered such that its apparent brightness is the same regardless of the angle of view). Secondly, the
surface should exhibit enough texture, but no repeating patterns.
get_stereo_model_object can be used to view intermediate results, in particular rectified, disparity and
score images. get_stereo_model_object_model_3d can be used to view the result of pairwise recon-
struction for models with Method=’surface_fusion’. See the paragraph "Troubleshooting for the configuration of
a stereo model" on how to use the obtained results.
Reconstruction algorithm
The operator reconstruct_surface_stereo performs multiple binocular stereo reconstructions and sub-
sequently combines the results. The image pairs of this pairwise reconstruction are specified in StereoModelID
as pairs of cameras of an associated calibrated multi-view setup.
For each image pair, the images are rectified before internally one of the operators binocular_disparity,
binocular_disparity_mg or binocular_disparity_ms is called. The disparity informa-
tion is then converted to points in the coordinate system of the from-camera by an internal call of
disparity_image_to_xyz. In the next step, the points are transformed into the common coordinate sys-
tem that is specified in the camera setup model associated with StereoModelID and stored in a common point
cloud together with the points extracted from other image pairs.
(Figure (1), (2): illustration of the parameters ’surface_tolerance’ and ’min_thickness’ relative to the initial surface.)
The parameters ’surface_tolerance’ and ’min_thickness’ regulate the fidelity to the initial surface obtained by
pairwise reconstruction. Points in a cone of sight of a camera are considered surely outside of the object (in
front of the surface) or surely inside the object (behind the surface) with respect to the given camera if their
distance to the initial surface exceeds ’surface_tolerance’. Points behind the surface (viewed from the given
camera) are only considered to lie inside the object if their distance to the initial surface does not exceed
’min_thickness’.
Each 3D point of the object model returned in ObjectModel3D is extracted from the isosurface where the
distance function equals zero. Its normal vector is calculated from the gradient of the distance function. While
the method ’surface_fusion’ requires the setting of more parameters than simple pairwise reconstruction,
post-processing of the obtained point cloud representing the object surface will probably get a lot simpler.
In particular, suppression of outliers, smoothing, equidistant sub-sampling and hole filling can be handled
nicely and often in high quality by this method. The same can be said about the possible internal meshing of
the output surface, see the next paragraph. Note that the algorithm will try to produce a closed surface. If the
object is only observed and reconstructed from one side, the far end of the bounding box usually determines
where the object is cut off. The method ’surface_fusion’ may take considerably longer than simple pairwise
reconstruction, depending mainly on the parameter ’resolution’.
Additionally, the so-obtained point cloud can be meshed in a post-processing step. The object model returned
in ObjectModel3D then contains the description of the mesh. For a stereo model of type ’surface_fusion’,
the algorithm ’marching tetrahedra’ is used which can be activated by setting the parameter ’point_meshing’
to ’isosurface’. The wanted meshed surface is extracted as the isosurface where the distance function equals
zero. Note that there are more points in ObjectModel3D if meshing of the isosurface is enabled even if
the used ’resolution’ is the same.
The proper configuration of a stereo model is not always easy. Please follow the workflow above. If the recon-
struction results are not satisfactory, please consult the following hints and ideas:
Run in persistence mode If you enable the ’persistence’ mode of the stereo model (call
set_stereo_model_param with GenParamName=’persistence’), a successive call to
reconstruct_surface_stereo will store intermediate iconic results, which provide addi-
tional information. They can be accessed by get_stereo_model_object_model_3d and
get_stereo_model_object.
Check the quality of the calibration
• If the camera setup was obtained by calibrate_cameras, it stores some quality information about
the camera calibration in form of standard deviations of the camera internal parameters. This informa-
tion is then carried in the camera setup model associated with the stereo model. It can be queried by
first calling get_stereo_model_param with GenParamName=’camera_setup_model’ and then
inspecting the camera parameter standard deviations by calling get_camera_setup_param with
GenParamName=’params_deviations’. Unusually big standard deviation values might indicate a bad
camera calibration.
• After setting the stereo model ’persistence’ mode, we recommend inspecting the rectified images for
each image pair. The rectified images are returned by get_stereo_model_object with a camera
index pair [From, To] specifying the pair of interest in the parameter PairIndex and the val-
ues ’from_image_rect’ and ’to_image_rect’ in ObjectName, respectively. If the images are properly
rectified, all corresponding image features must appear in the same row in both rectified images. A
discrepancy of several rows is a serious indication for a bad camera calibration.
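A hedged sketch of the first of these checks (the CameraIdx argument of get_camera_setup_param is an assumption to be verified in that operator's reference):
* Query the camera setup stored in the stereo model and inspect the standard
* deviations of the interior parameters of camera 0.
get_stereo_model_param (StereoModelID, 'camera_setup_model', CameraSetupModelID)
get_camera_setup_param (CameraSetupModelID, 0, 'params_deviations', Deviations)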
Inspect the used bounding box Make sure that the bounding box is tight around the volume of interest. If the
parameters ’min_disparity’ and ’max_disparity’ are not set manually by using create_stereo_model
or set_stereo_model_param, the algorithm uses the projection of the bounding box into both im-
ages of each image pair in order to estimate the values for MinDisparity and MaxDisparity, which
in turn are used in the internal call to binocular_disparity and binocular_disparity_ms.
These values can be queried using get_stereo_model_param and if needed, can be adapted using
set_stereo_model_param. If the disparity values are set manually, the bounding box is only used
to restrict the reconstructed 3D points. In the case of using binocular_disparity_mg as disparity
method, suitable values for the parameters InitialGuess and ’initial_level’ are derived from the bound-
ing box. However, these values can also be reset using set_stereo_model_param. Use the procedures
gen_bounding_box_object_model_3d to create a 3D object model of your stereo model, and in-
spect it in conjunction with the reconstructed 3D object model to verify the bounding box visually.
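For example (a hedged sketch; the derived disparity range is typically inspected after a first reconstruction run, and the adapted values are purely illustrative):
* 'min_disparity'/'max_disparity' are tuple-valued, so query them separately.
get_stereo_model_param (StereoModelID, 'min_disparity', MinDisparity)
get_stereo_model_param (StereoModelID, 'max_disparity', MaxDisparity)
* Widen the range slightly if the estimate derived from the bounding box is too tight.
set_stereo_model_param (StereoModelID, 'min_disparity', MinDisparity - 10)
set_stereo_model_param (StereoModelID, 'max_disparity', MaxDisparity + 10)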
Improve the quality of the disparity images After setting the stereo model ’persistence’ mode (see above),
inspect the disparity and the score images for each image pair. They are returned by
get_stereo_model_object with a camera index pair [From, To] specifying the pair of inter-
est in the parameter PairIndex and the values ’disparity_image’ and ’score_image’ in ObjectName,
respectively. If both images exhibit significant imperfection (e.g., the disparity image does not re-
ally resemble the shape of the object seen in the image), try to adjust the parameters used for the
internal call to binocular_disparity (the parameters with a ’binocular_’ prefix) by modifying
set_stereo_model_param until some improvement is achieved.
Alternatively, a different method to calculate the disparities can be used. Besides the above-
mentioned internal call of binocular_disparity, HALCON also provides the two other methods
binocular_disparity_mg and binocular_disparity_ms. These methods feature, e.g., the calculation
of disparities in textureless regions at the expense of increased reconstruction time compared with cross-
correlation methods. However, for these methods, it can be necessary to adapt the parameters to the un-
derlying dataset as well. Dependent on the chosen method, the user can either set the parameters with a
’binocular_mg_’ or a ’binocular_ms_’ prefix until some improvement is achieved.
A detailed description of the provided methods and their parameters can be found in
binocular_disparity, binocular_disparity_mg or binocular_disparity_ms, re-
spectively.
Fusion parameters If the result of pairwise reconstruction as inspected by
get_stereo_model_object_model_3d cannot be improved anymore, begin to adapt the fu-
sion parameters. For a description of the parameters see also set_stereo_model_param. Note that
even a barely discernible pairwise reconstruction can sometimes still be tweaked by the fusion algorithm into
something sensible. In any case, pairwise reconstruction should yield enough points as input for the fusion
algorithm.
Runtime
In order to improve the runtime, consider the following hints:
Extent of the bounding box The bounding box should be tight around the volume of interest. Otherwise, the runtime
will increase unnecessarily and, for the method ’surface_fusion’, drastically.
Reduce the domain of the input images Reducing the domain of the input images (e.g., with reduce_domain)
to the relevant part of the image may heavily speed up the algorithm, especially for large images.
Sub-sampling in the rectification step The stereo model parameter ’rectif_sub_sampling’ (see
set_stereo_model_param) controls the sub-sampling in the rectification step. Setting this fac-
tor to a value > 1.0 will reduce the resolution of the rectified images compared to the original images. This
factor has a direct impact on the succeeding performance of the chosen disparity method, but it causes
loss of image detail. The parameter ’rectif_interpolation’ could have also some impact, but typically not a
significant one.
Disparity parameters There is a trade-off between completeness of the pairwise surface reconstruction on the
one hand and reconstruction runtime on the other. The stereo model offers three different methods to
calculate the disparity images. Dependent on the chosen method, the stereo model provides a particu-
lar set of parameters that enables a precise adaption of the method to the used dataset. If the method
binocular_disparity is selected, only parameters with a ’binocular_’ prefix can be set. For the
method binocular_disparity_mg, all settable parameters have to exhibit the prefix ’binocular_mg_’,
whereas for the method binocular_disparity_ms only parameters with ’binocular_ms_’ are applica-
ble.
Parameters using the method binocular_disparity:
• NumLevels
• MaskWidth
• MaskHeight
• Filter
• SubDisparity
Each of these parameters of binocular_disparity has a corresponding stereo model parameter
written in snake case and with the prefix ’binocular_’; each affects the performance to a different degree.
Adapting them properly can improve the performance.
Parameters using the method binocular_disparity_mg:
• GrayConstancy
• GradientConstancy
• Smoothness
• InitialGuess
• ’mg_solver’
• ’mg_cycle_type’
• ’mg_pre_relax’
• ’mg_post_relax’
• ’initial_level’
• ’iterations’
• ’pyramid_factor’
Each of these parameters of binocular_disparity_mg has a corresponding stereo model parameter
written in snake case and with the prefix ’binocular_mg_’; each affects the performance and the result
to a different degree. Adapting them properly can improve the performance.
Parameters using the method binocular_disparity_ms:
• SurfaceSmoothing
• EdgeSmoothing
• ’consistency_check’
• ’similarity_measure’
• ’sub_disparity’
Each of these parameters of binocular_disparity_ms has a corresponding stereo model parameter
written in snake case and with the prefix ’binocular_ms_’; each affects the performance and the result
to a different degree. Adapting them properly can improve the performance.
Reconstruct only points with high disparity score Besides adapting the sub-sampling it is also possible to ex-
clude points of the 3D reconstruction because of their computed disparity score. In order to do this, the
user should first query the score images for the disparity values by calling get_stereo_model_object
using ObjectName = ’score_image’. Dependent on the distribution of these values, the user can de-
cide whether disparities with a score beneath a certain threshold should be excluded from the reconstruc-
tion. This can be achieved with set_stereo_model_param using GenParamName =
’binocular_score_thresh’. The advantage of excluding points from the reconstruction is a slight speed-up since it is
not necessary to process the entire dataset. As an alternative to the above-mentioned procedure, it is also
possible to exclude points after executing reconstruct_surface_stereo by filtering reconstructed
3D points. The advantage of this is that at the expense of a slightly increased runtime, a second call to
reconstruct_surface_stereo is not necessary.
Sub-sampling of X,Y,Z data For the method ’surface_pairwise’, you can use a larger sub-sampling
step for the X,Y,Z data in the last step of the reconstruction algorithm by modifying
GenParamName=’sub_sampling_step’ with set_stereo_model_param. The reconstructed data
will be much sparser, thus speeding up the post-processing.
Fusion parameters For the method ’surface_fusion’, enlarging the parameter ’resolution’ will speed up the exe-
cution considerably.
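The runtime-related parameters discussed above can be adjusted as in the following sketch (values purely illustrative):
* Discard disparities with a low score before reconstruction.
set_stereo_model_param (StereoModelID, 'binocular_score_thresh', 0.4)
* 'surface_pairwise': coarser sub-sampling of the X,Y,Z data.
set_stereo_model_param (StereoModelID, 'sub_sampling_step', 3)
* 'surface_fusion': a coarser 'resolution' (in [m]) speeds up the fusion considerably.
set_stereo_model_param (StereoModelID, 'resolution', 0.002)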
Parameters
. Images (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage-array ; object : byte
An image array acquired by the camera setup associated with the stereo model.
. StereoModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . stereo_model ; handle
Handle of the stereo model.
. ObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle to the resulting surface.
Execution Information
Note that after modifying these parameters, set_stereo_model_image_pairs must be executed again for
the changes to take effect.
The current list of image pairs in the model can be inspected by get_stereo_model_image_pairs.
Parameters
. StereoModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . stereo_model ; handle
Handle of the stereo model.
. From (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Camera indices for the from cameras in the image pairs.
Number of elements: From > 0
. To (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Camera indices for the to cameras in the image pairs.
Number of elements: To == From
Execution Information
See also
set_stereo_model_param, get_stereo_model_image_pairs
Module
3D Metrology
’color’: By setting this parameter to one of the following values, the coloring of the reconstructed 3D object
model is either enabled or disabled (’none’). See reconstruct_surface_stereo on how to access
the resulting color information.
’median’ The color value of a 3D point is the median of the color values of all cameras where the 3D point
is visible.
’smallest_distance’ The color value of a 3D point corresponds to the color value of the camera that exhibits
the smallest distance to this 3D point.
’mean_weighted_distances’ All cameras that contribute to the reconstruction of a 3D point are weighted
according to their distance to the 3D point. Cameras with a smaller distance receive a higher weight,
whereas cameras with a larger distance get a lower weight. The color value of a 3D point is then
computed by averaging the weighted color values of the cameras.
’line_of_sight’ The color value of a 3D point corresponds to the color value of the camera that exhibits the
smallest angle between the point normal and the line of sight.
’mean_weighted_lines_of_sight’ All cameras that contribute to the reconstruction of a 3D point are weighted
according to their angle between the point normal and the line of sight. Cameras with a smaller angle
receive a higher weight. The color value of a 3D point is then computed by averaging the weighted color
values of the cameras.
List of values: ’none’, ’smallest_distance’, ’mean_weighted_distances’, ’line_of_sight’,
’mean_weighted_lines_of_sight’, ’median’.
Default: ’none’.
’color_invisible’: If stereo models of type ’surface_fusion’ are used, the reconstruction will contain points without
a direct correspondence to points in the images. These points are not seen by any of the cameras of the stereo
system and are therefore "invisible". A color value for these points has to be calculated using the color
of points in the vicinity. Coloring these "invisible" points can be switched off by setting this parameter
to ’false’. In this case invisible points are assigned 255 as gray value. Normally, coloring of "invisible"
points is not very time-consuming and can remain active. However, it may happen that the value for the
parameter ’resolution’ is considerably finer than the available image resolution. In this case, many invisible
3D points are reconstructed making the nearest neighbor search very time consuming. In order to avoid an
increased runtime, it is recommended to either adapt the value of ’resolution’ or to switch off the calculation
for invisible points. Please note that for stereo models of type ’surface_pairwise’, this parameter will not
have any effect.
List of values: ’true’, ’false’.
Default: ’true’.
’rectif_interpolation’: Interpolation mode for the rectification maps (see
set_stereo_model_image_pairs). Note that after changing this parameter, you must call
set_stereo_model_image_pairs again for the changes to take effect.
List of values: ’none’, ’bilinear’.
Default: ’bilinear’.
’rectif_sub_sampling’: Sub-sampling factor for the rectification maps (see
set_stereo_model_image_pairs). Note that after changing this parameter, you must call
set_stereo_model_image_pairs again for the changes to take effect.
Suggested values: 0.5, 0.66, 1.0, 1.5, 2.0, 3.0, 4.0.
Default: 1.0.
’rectif_method’: Rectification method for the rectification maps (see set_stereo_model_image_pairs).
Note that after changing this parameter, you must call set_stereo_model_image_pairs again for
the changes to take effect.
List of values: ’viewing_direction’, ’geometric’.
Default: ’viewing_direction’.
’disparity_method’: Method used to create disparity images from the image pairs (see
reconstruct_surface_stereo). Currently, the three methods ’binocular’, ’binocular_mg’
and ’binocular_ms’ are supported. Dependent on the chosen method, the HALCON operator
’point_meshing’: Enables the post-processing step for meshing the reconstructed surface points. For a stereo
model of type ’surface_pairwise’, a Poisson solver is supported. For a stereo model of type ’surface_fusion’,
a meshing of the isosurface is supported (see reconstruct_surface_stereo for more details).
List of values: ’none’, ’poisson’, ’isosurface’.
Default: ’none’.
If the Poisson-based meshing is enabled, the following parameters can be set:
• ’poisson_depth’: Depth of the solver octree. More detail (i.e., a higher resolution) of the resulting mesh
is achieved with deeper trees. However, this requires more time and memory.
Suggested values:
6, 8, 10.
Default: 8.
Restriction: 3 <= ’poisson_depth’ <= 12
• ’poisson_solver_divide’: Depth of the block Gauss-Seidel solver used for solving the Poisson equation. At
the price of a small time overhead, this parameter reduces the memory consumption of the underlying
meshing algorithm. Proposed values are depths 0 to 2 levels smaller than the main octree depth.
Suggested values: 6, 8, 10.
Default: 8.
Restriction: 3 <= ’poisson_solver_divide’ <= ’poisson_depth’
• ’poisson_samples_per_node’: Minimum number of points that should fall in a single octree leaf. This
parameter is used to handle noisy data, e.g., noise-free data can be distributed over many leaves, whereas
more noisy data should be stored in a single leaf to compensate for the noise. As a side effect, bigger
values of this parameter distribute the data in fewer leaves, which results in a smaller octree, which
means a speedup but possibly less detail of the reconstruction.
Suggested values: 1, 5, 10, 30, 40.
Default: 30.
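As a hedged sketch (StereoModelID, Images, and the parameter values are illustrative, not recommendations), enabling the Poisson-based meshing and tuning its parameters could look as follows:
* Enable Poisson-based meshing and set its parameters.
set_stereo_model_param (StereoModelID, 'point_meshing', 'poisson')
set_stereo_model_param (StereoModelID, 'poisson_depth', 10)
set_stereo_model_param (StereoModelID, 'poisson_solver_divide', 8)
set_stereo_model_param (StereoModelID, 'poisson_samples_per_node', 30)
* The meshed surface is then computed by reconstruct_surface_stereo.
reconstruct_surface_stereo (Images, StereoModelID, ObjectModel3D)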
’sub_sampling_step’: sub-sampling step for the X, Y and Z image data resulting from the pairwise
disparity estimation, before this data is used in its turn for the surface reconstruction (see
reconstruct_surface_stereo).
Suggested values: 1, 2, 3.
Default: 2.
’resolution’: Distance of neighboring sample points in each coordinate direction in the discretization of the bounding box. ’resolution’ is set in [m]. See reconstruct_surface_stereo for more details.
Too small values will unnecessarily increase the runtime. Too large values will lead to a reconstruction with
too few details. Per default, it is set to a coarse resolution depending on the bounding box. The parameter
will be reset if the bounding box is reset.
’smoothing’ may need to be adapted when ’resolution’ is changed.
’surface_tolerance’ should always be a bit larger than ’resolution’ in order to avoid effects of discretization.
Suggested values: 0.001, 0.01
’surface_tolerance’: Specifies how much noise around the input point cloud should be combined to a sur-
face. Points in a cone of sight of a camera are considered surely outside of the object (in front
of the surface) or surely inside the object (behind the surface) with respect to the given camera if
their distance to the initial surface exceeds ’surface_tolerance’. ’surface_tolerance’ is set in [m]. See
reconstruct_surface_stereo for more details and a figure.
Too small values lead to an uneven surface. Too large values smudge distinct surfaces into one. Per default,
it is set to three times ’resolution’. The parameter will be reset if the bounding box is reset.
’surface_tolerance’ should always be a bit larger than ’resolution’ in order to avoid effects of discretization.
’min_thickness’ always has to be larger than or equal to ’surface_tolerance’.
Result
estimate_al_am always returns the value 2 (H_MSG_TRUE).
Execution Information
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte
Image for which slant and albedo are to be estimated.
. Slant (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg(-array) ; real
Angle between the light sources and the positive z-axis (in degrees).
. Albedo (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Amount of light reflected by the surface.
Result
estimate_sl_al_zc always returns the value 2 (H_MSG_TRUE).
Execution Information
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte
Image for which the tilt is to be estimated.
. Tilt (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg(-array) ; real
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
Result
estimate_tilt_zc always returns the value 2 (H_MSG_TRUE).
Execution Information
When setting the parameters Slants and Tilts, remember that the illumination source is assumed to produce parallel light rays, that the camera has a telecentric lens, and that the camera is placed orthogonally to the scene to be reconstructed:
Slants The Slants angle is the angle between the optical axis of the camera and the direction of the illumina-
tion.
(Figure: side view of the setup illustrating the slant angle between the camera's optical axis and the illumination direction.)
Tilts The Tilts angle is measured within the object plane or any plane that is parallel to it, e.g., the image
plane. In particular, it describes the angle between the direction that points from the center of the image to
the right and the direction of light that is projected into the plane. That is, when looking at the image (or the
corresponding scene), a tilt angle of 0 means that the light comes from the right, a tilt angle of 90 means that
the light is coming from the top, a tilt angle of 180 means that the light is coming from the left, etc.
(Figure: top view of the setup illustrating the tilt angle; 0° corresponds to light from the right, 90° from the top, 180° from the left, and 270° from the bottom.)
As stated before, photometric stereo requires at least three images with different directions of illumination. How-
ever, the three-dimensional geometry of objects typically leads to shadow casting. In the shadow regions, the
number of effectively available directions of illumination is reduced, which leads to ambiguities. To nevertheless
get a robust result, redundancy is needed. Therefore, typically more than three light sources with different direc-
tions should be used. But note that an increasing number of illumination directions also leads to a higher number
of images to be processed and therefore to a higher processing time. In most applications, a number of four to six
light sources is reasonable. As a rule of thumb, the slant angles should be chosen between 30° and 60°. The tilt
angles typically should be equally distributed around the object to be measured. Please note that the directions of
illumination must be selected such that they do not lie in the same plane (i.e., the illumination directions must be
independent), otherwise the computation fails and an exception is thrown.
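For example, a setup with four light sources following this rule of thumb could be described by the following tuples (illustrative values):
Slants := [45, 45, 45, 45]
Tilts := [0, 90, 180, 270]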
Input images and domains of definition
The input images must be provided in an image array (Images). Each image must have been taken with a different
direction of illumination as stated above. If the images are primarily stored in a multi-channel image, they can be
easily converted to an image array using image_to_channels. As an alternative, the image array can be
created using concat_obj.
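A minimal sketch of both variants (the image variables and file names are placeholders):
* Variant 1: the images are channels of one multi-channel image.
image_to_channels (MultiChannelImage, Images)
* Variant 2: the images are read individually and concatenated.
read_image (Image1, 'ps_image_1')
read_image (Image2, 'ps_image_2')
read_image (Image3, 'ps_image_3')
concat_obj (Image1, Image2, Images)
concat_obj (Images, Image3, Images)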
photometric_stereo relies on the evaluation of the "photometric information", i.e., the gray values
stored in the images. Therefore, this information should be unbiased and accurate. We recommend to en-
sure that the camera that is used to acquire the images has a linear characteristic. You can use the opera-
tor radiometric_self_calibration to determine the characteristic of your camera and the operator
lut_trans to correct the gray value information in case of a non-linear characteristic. Additionally, if accurate measurements are required, we recommend to utilize the full dynamic range of the camera since this leads to
more accurate gray value information. For the same reason, using images with a bit depth higher than 8 (e.g., uint2
images instead of byte images) leads to a better accuracy.
The domain of definition of the input images determines which algorithm is used internally to process the Images.
Three algorithms are available:
• If all images have a full domain, the fastest algorithm is used. This mode is recommended for most applica-
tions.
• If the input images share the same reduced domain of definition, only the pixels within the domain are
processed. This mode can be used to exclude areas of the object from all images. Typically, areas are
excluded that are known to show non-Lambertian reflectance characteristics or that are of no interest, e.g.,
holes in the surface.
• If images with distinct domains of definition are provided, only the gray values that are contained in the
domains are used in the respective images. Then, only those pixels are processed that have independent slant
and tilt angles in at least three images. This mode is suitable, e.g., to exclude specific regions of individual
images from the processing. These can be, e.g., areas of the object that are known to show non-Lambertian reflectance characteristics or regions that are known to contain biased photometric information, e.g., shadows. Excluding such regions leads to more accurate results. Please note that this last
mode requires significantly more processing time than the modes that use the full domain or the same domain
for all images.
Output images
The operator can return the images for the reconstructed Gradient, Albedo, and the HeightField of the
surface:
• The Gradient image is a vector field that contains the partial derivatives of the surface. Note that Gradient can be used as input to reconstruct_height_field_from_gradient. For visualization purposes, normalized surface normals can be returned instead of the surface gradients. Then, ResultType must be set to ’normalized_surface_normal’ (legacy: ’normalized_gradient’) instead of ’gradient’. In this case, the row and column components of the vector field represent the row and column components of the normalized surface normal. If ResultType is set to ’all’, the default mode ’gradient’ is used, not ’normalized_surface_normal’.
• The Albedo image describes the ratio of reflected radiation to incident radiation and has a value between one
(white surface) and zero (black surface). Thus, the albedo is a characteristic of the surface. For example, for
a printed surface it corresponds to the print image exclusive of any influences of the incident light (shading).
• The HeightField image is an image in which the pixel values correspond to a relative height.
By default, all of these iconic objects are returned, i.e., the parameter ResultType is set to ’all’. In case
that only some of these results are needed, the parameter ResultType can be set to a tuple specifying only
the required results among the values ’gradient’, ’albedo’, and ’height_field’. Note that in certain applications
like surface inspection tasks only the Gradient or Albedo images are required. Here, one can significantly
increase the processing speed by not reconstructing the surface, i.e., by passing only ’gradient’ and ’albedo’ but
not ’height_field’ to ResultType.
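As a sketch (assuming Images, Slants, and Tilts have been prepared as described above), restricting ResultType in this way could look as follows; the HeightField output object is then not reconstructed, and the reconstruction parameters are ignored:
photometric_stereo (Images, HeightField, Gradient, Albedo, Slants, Tilts, \
                    ['gradient','albedo'], 'poisson', [], [])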
Note that internally photometric_stereo first determines the gradient values and, if required, integrates
these values in order to obtain the height field. This integration is performed by the same algorithms that are
provided by the operator reconstruct_height_field_from_gradient and that can be controlled by
the parameters ReconstructionMethod, GenParamName, and GenParamValue. Please, refer to the
operator reconstruct_height_field_from_gradient for more information on these parameters. If
ResultType is set such that ’height_field’ is not one of the results, the parameters ReconstructionMethod,
GenParamName, and GenParamValue are ignored.
Attention
Note that photometric_stereo assumes square pixels. Additionally, it assumes that the heights are computed
on a lattice with step width 1 in object space. If this is not the case, i.e., if the pixel size of the camera projected
into the object space differs from 1, the returned height values must be multiplied by the actual step width (value
of the pixel size projected into the object space). The size of the pixel in object space is computed by dividing the
size of the pixel in the camera by the magnification of the (telecentric) lens.
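A small sketch of this correction (PixelSize and Magnification are illustrative values for a telecentric setup):
* Pixel size projected into object space = pixel size / magnification.
PixelSize := 5.5e-6
Magnification := 0.2
StepWidth := PixelSize / Magnification
* Multiply the returned height values by the actual step width.
scale_image (HeightField, HeightFieldScaled, StepWidth, 0)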
Parameters
Result
If the parameters are valid, photometric_stereo returns the value 2 (H_MSG_TRUE). If necessary, an ex-
ception is raised.
Execution Information
Possible Predecessors
optimize_fft_speed
Module
3D Metrology
reconstruct_height_field_from_gradient (
Gradient : HeightField : ReconstructionMethod, GenParamName,
GenParamValue : )
The optimization parameters for all algorithms can be saved and loaded by
write_fft_optimization_data and read_fft_optimization_data.
Non-obvious applications
Please note that the operator reconstruct_height_field_from_gradient has various non-obvious applications, especially in the field of gradient domain manipulation techniques. In many applications, the
gradient values that are passed as input to the operator do not have the semantics of surface gradients (i.e., the
first derivatives of the height values), but are rather the first derivatives of other kinds of parameters, typically
gray values (then, the gradients have the semantics of gray value edges). When processing these gradient images
by various means, e.g., by adding or subtracting images, or by a filtering, the original gradient values are altered
and the subsequent call to reconstruct_height_field_from_gradient delivers a modified image, in
which, e.g., unwanted edges are removed or the contrast has been changed locally. Typical applications are noise
removal, seamless fusion of images, or high dynamic range compression.
Attention
reconstruct_height_field_from_gradient takes into account the values of all pixels in Gradient,
not only the values within its domain. If Gradient does not have a full domain, one could cut out the relevant
square part of the gradient field and generate a smaller image with full domain.
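A minimal call, e.g., for the Gradient returned by photometric_stereo, could look as follows (generic parameters left at their defaults):
reconstruct_height_field_from_gradient (Gradient, HeightField, 'poisson', \
                                        [], [])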
Parameters
. Gradient (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : vector_field
The gradient field of the image.
. HeightField (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image ; object : real
Reconstructed height field.
. ReconstructionMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the reconstruction method.
Default: ’poisson’
List of values: ReconstructionMethod ∈ {’fft_cyclic’, ’rft_cyclic’, ’poisson’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
Names of the generic parameters.
Default: []
List of values: GenParamName ∈ {’optimize_speed’, ’caching’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer / real / string
Values of the generic parameters.
Default: []
List of values: GenParamValue ∈ {’standard’, ’patient’, ’exhaustive’, ’use_cache’, ’no_cache’,
’free_cache’}
Result
If the parameters are valid reconstruct_height_field_from_gradient returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
References
M. Kazhdan, M. Bolitho, and H. Hoppe: “Poisson Surface Reconstruction.” Symposium on Geometry Processing
(June 2006).
Module
3D Metrology
sfs_mod_lr reconstructs a surface (i.e. the relative height of each image point) using the modified algorithm of
Lee and Rosenfeld. The surface is reconstructed from the input image Image, and the light source given by the
parameters Slant, Tilt, Albedo and Ambient, and is assumed to lie infinitely far away in the direction given
by Slant and Tilt. The parameter Albedo determines the albedo of the surface, i.e. the percentage of light
reflected in all directions. Ambient determines the amount of ambient light falling onto the surface. It can be set
to values greater than zero if, for example, the white balance of the camera was badly adjusted at the moment the
image was taken.
Attention
sfs_mod_lr assumes that the heights are to be extracted on a lattice with step width 1. If this is not the case, the
calculated heights must be multiplied with the step width after the call to sfs_mod_lr. A Cartesian coordinate
system with the origin in the lower left corner of the image is used internally. sfs_mod_lr can only handle
byte-images.
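A hedged end-to-end sketch using the estimation operators listed as possible predecessors (the file name is a placeholder, and the ambient light is simply assumed to be 0):
read_image (Image, 'shaded_scene')
* Estimate the illumination parameters from the image itself.
estimate_sl_al_zc (Image, Slant, Albedo)
estimate_tilt_zc (Image, Tilt)
* Reconstruct the relative height field (Ambient assumed to be 0.0).
sfs_mod_lr (Image, Height, Slant, Tilt, Albedo, 0.0)
* Optionally re-shade the height field for visual inspection.
shade_height_field (Height, ImageShade, Slant, Tilt, Albedo, 0.0, 'false')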
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte
Shaded input image.
. Height (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; object : real
Reconstructed height field.
. Slant (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; real / integer
Angle between the light source and the positive z-axis (in degrees).
Default: 45.0
Suggested values: Slant ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Value range: 0.0 ≤ Slant ≤ 180.0 (lin)
Minimum increment: 0.01
Recommended increment: 10.0
. Tilt (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; real / integer
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
Default: 45.0
Suggested values: Tilt ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Value range: 0.0 ≤ Tilt ≤ 360.0 (lin)
Minimum increment: 0.01
Recommended increment: 10.0
. Albedo (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Amount of light reflected by the surface.
Default: 1.0
Suggested values: Albedo ∈ {0.1, 0.5, 1.0, 5.0}
Value range: 0.0 ≤ Albedo ≤ 5.0 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
Restriction: Albedo >= 0.0
. Ambient (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Amount of ambient light.
Default: 0.0
Suggested values: Ambient ∈ {0.1, 0.5, 1.0}
Value range: 0.0 ≤ Ambient ≤ 1.0 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
Restriction: Ambient >= 0.0
Result
If all parameters are correct sfs_mod_lr returns the value 2 (H_MSG_TRUE). Otherwise, an exception is raised.
Execution Information
Possible Predecessors
estimate_al_am, estimate_sl_al_lr, estimate_sl_al_zc, estimate_tilt_lr,
estimate_tilt_zc, optimize_fft_speed
Possible Successors
shade_height_field
Module
3D Metrology
Parameters
. ImageHeight (input_object) . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte / int4 / real
Height field to be shaded.
. ImageShade (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; object : byte
Shaded image.
. Slant (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; real / integer
Angle between the light source and the positive z-axis (in degrees).
Default: 0.0
Suggested values: Slant ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Value range: 0.0 ≤ Slant ≤ 180.0 (lin)
Minimum increment: 0.01
Recommended increment: 10.0
. Tilt (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . angle.deg ; real / integer
Angle between the light source and the x-axis after projection into the xy-plane (in degrees).
Default: 0.0
Suggested values: Tilt ∈ {1.0, 5.0, 10.0, 20.0, 40.0, 60.0, 90.0}
Value range: 0.0 ≤ Tilt ≤ 360.0 (lin)
Minimum increment: 0.01
Recommended increment: 10.0
. Albedo (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Amount of light reflected by the surface.
Default: 1.0
Suggested values: Albedo ∈ {0.1, 0.5, 1.0, 5.0}
Value range: 0.0 ≤ Albedo ≤ 5.0 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
Restriction: Albedo >= 0.0
. Ambient (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Amount of ambient light.
Default: 0.0
Suggested values: Ambient ∈ {0.1, 0.5, 1.0}
Value range: 0.0 ≤ Ambient ≤ 1.0 (lin)
Minimum increment: 0.01
Recommended increment: 0.1
Restriction: Ambient >= 0.0
. Shadows (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Should shadows be calculated?
Default: ’false’
Suggested values: Shadows ∈ {’true’, ’false’}
Result
If all parameters are correct shade_height_field returns the value 2 (H_MSG_TRUE). Otherwise, an excep-
tion is raised.
Execution Information
Result
The operator uncalibrated_photometric_stereo returns the NormalField for the given images as
well as the appropriate gradients for each pixel and the Albedo of the object.
Execution Information
References
H. Hayakawa: “Photometric stereo under a light source with arbitrary motion”. Journal Optical Society America,
Vol. 11, No. 11/November 1994.
Module
3D Metrology
apply_sheet_of_light_calibration (
Disparity : : SheetOfLightModelID : )
’calibration’: extent of the calibration transformation which shall be applied to the disparity image. ’calibration’
must be set to ’xz’, ’xyz’ or ’offset_scale’. Refer to set_sheet_of_light_param for details on this
parameter.
’camera_parameter’: the internal parameters of the camera used for the measurement. This pose is required when
the calibration extent has been set to ’xyz’ or ’xz’.
’camera_pose’: the pose of the world coordinate system relative to the camera coordinate system. This pose is
required when the calibration extent has been set to ’xyz’ or ’xz’.
’lightplane_pose’: the pose of the light-plane coordinate system relative to the world coordinate system. The
light-plane coordinate system must be chosen so that its plane z=0 coincides with the light plane described
by the light line projector. This pose is required when the calibration extent has been set to ’xyz’ or ’xz’.
’movement_pose’: a pose representing the movement of the object between two successive profile images with re-
spect to the measurement system built by the camera and the laser. This pose is required when the calibration
extent has been set to ’xyz’. It is ignored when the calibration extent has been set to ’xz’.
’scale’: with this parameter you can scale the 3D coordinates X, Y and Z that result when applying the calibration
transformations to the disparity image. ’scale’ must be specified as the ratio desired unit/original unit. The
original unit is determined by the coordinates of the calibration object. If the original unit is meters (which is
the case if you use the standard calibration plate), you can set the desired unit directly by selecting ’m’, ’cm’,
’mm’ or ’um’ for the parameter ’scale’. By default, ’scale’ is set to 1.0.
Parameters
. Disparity (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Height or range image to be calibrated.
. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle of the sheet-of-light model.
Example
* ...
* Read an already acquired disparity map from file
read_image (Disparity, 'sheet_of_light/connection_rod_disparity.tif')
*
* Create a model and set the required parameters
gen_rectangle1 (ProfileRegion, 120, 75, 195, 710)
create_sheet_of_light_model (ProfileRegion, ['min_gray','num_profiles', \
'ambiguity_solving'], [70,290,'first'], \
SheetOfLightModelID)
set_sheet_of_light_param (SheetOfLightModelID, 'calibration', 'xyz')
set_sheet_of_light_param (SheetOfLightModelID, 'scale', 'mm')
set_sheet_of_light_param (SheetOfLightModelID, 'camera_parameter', \
CameraParameter)
set_sheet_of_light_param (SheetOfLightModelID, 'camera_pose', CameraPose)
set_sheet_of_light_param (SheetOfLightModelID, 'lightplane_pose', \
LightPlanePose)
set_sheet_of_light_param (SheetOfLightModelID, 'movement_pose', \
MovementPose)
*
* Apply the calibration transforms and
* get the resulting calibrated coordinates
apply_sheet_of_light_calibration (Disparity, SheetOfLightModelID)
get_sheet_of_light_result (X, SheetOfLightModelID, 'x')
get_sheet_of_light_result (Y, SheetOfLightModelID, 'y')
get_sheet_of_light_result (Z, SheetOfLightModelID, 'z')
*
Result
The operator apply_sheet_of_light_calibration returns the value 2 (H_MSG_TRUE) if the given pa-
rameters are correct. Otherwise, an exception will be raised.
Execution Information
• Create a sheet-of-light model with create_sheet_of_light_model and adapt the default parameters
to your specific measurement task.
• Set the initial parameters of the camera with set_sheet_of_light_param. So far, only pinhole cam-
eras with the division model are supported, i.e., only cameras of type ’area_scan_division’.
• Set the description file of the calibration object (created with create_sheet_of_light_calib_object)
with set_sheet_of_light_param.
For this, the calibration object must be oriented such that either its front side or its back side intersects the
light plane first (i.e., the movement vector should be parallel to the Y axis of the calibration object, see
create_sheet_of_light_calib_object). As far as possible, the domain of the disparity image of the
calibration object should be restricted to the calibration object. Besides, the domain of the disparity image should
have no holes on the truncated pyramid. All four sides of the truncated pyramid must be clearly visible.
Calibration of the sheet-of-light setup
The calibration is then performed with calibrate_sheet_of_light. The returned Error is the RMS of
the distance of the reconstructed points to the calibration object in meters.
For sheet-of-light models calibrated with calibrate_sheet_of_light, in rare cases the parameters might
yield an unrealistic setup. However, the quality of measurements performed with the calibrated parameters is not
affected.
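A condensed sketch of this workflow (ProfileRegion, CamParam, and the file name are placeholders):
create_sheet_of_light_model (ProfileRegion, ['min_gray','num_profiles'], \
                             [70,290], SheetOfLightModelID)
set_sheet_of_light_param (SheetOfLightModelID, 'camera_parameter', CamParam)
set_sheet_of_light_param (SheetOfLightModelID, 'calibration_object', \
                          'calib_object.dxf')
* Acquire and measure the profile images of the calibration object here,
* e.g., with measure_profile_sheet_of_light, then calibrate:
calibrate_sheet_of_light (SheetOfLightModelID, Error)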
Parameters
. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle of the sheet-of-light model.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Average back projection error of the optimization.
Example
Result
The operator calibrate_sheet_of_light returns the value 2 (H_MSG_TRUE) if the calibration was suc-
cessful. Otherwise, an exception will be raised.
Execution Information
clear_sheet_of_light_model ( : : SheetOfLightModelID : )
(Figure: sketch of the calibration object with the dimensions Width, Length, HeightMin, and HeightMax and the X and Z axes.)
The calibration object consists of a ramp with a truncated pyramid rotated by 45 degrees. The calibration object
contains an orientation mark in the form of a circular hole. The dimensions of the calibration target in Width,
Length, HeightMin, and HeightMax must be given in meters. Length must be at least 10% larger than
Width. The Z coordinate of the highest point on the truncated pyramid is at most HeightMax. The calibration
object might not be found by calibrate_sheet_of_light if the height difference between the truncated
pyramid and the ramp is too small. In this case, adjust HeightMin and HeightMax accordingly or increase the
sampling rate when acquiring the calibration data.
The dimensions of the calibration object should be chosen such that it is possible to cover the measuring volume of
the sheet-of-light setup. In addition, when selecting the Length of the calibration object, the speed of the sheet-
of-light setup should be considered such that the calibration object is sampled with enough profile measurements.
Technical drawing of the calibration object, where c is the diameter of the orientation mark, d is the distance of
the pyramid from the front of the calibration object, h is the height of the truncated pyramid, b is the length of the
diagonal of the pyramid at the bottom, t is the corresponding length at the top, and α is the angle of the ramp as
seen in the drawing. You can calculate these dimensions with the procedure
get_sheet_of_light_calib_object_dimensions.
Set the parameter ’calibration_object’ to FileName with set_sheet_of_light_param to use the gener-
ated calibration object in a subsequent call to calibrate_sheet_of_light.
Note that MVTec does not offer 3D calibration objects. Instead, use
create_sheet_of_light_calib_object to generate a customized CAD model of a calibration
object. This CAD model can then be used to produce the calibration object. Milled aluminum is an established
material for this. However, depending on the required precision, its thermal stability may be a problem. Note that the surface should be bright. Its color may have to be adjusted depending on the color of the laser in order to provide sufficient contrast. Additionally, the surface must be neither translucent nor reflective. To achieve this, you can anodize or lacquer it. Please note that lacquering might decrease the accuracy due to the applied paintwork. However, a surface that is too rough leads to decreased precision as well. It is
advisable to have the produced calibration object remeasured to determine whether the required accuracy can
be achieved. The accuracy of the calibration object should be ten times higher than the required accuracy of
measurement. After having the object measured, the results can be manually inserted into the DXF file that can
then be used for the calibration with calibrate_sheet_of_light.
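For example, a calibration object with the default dimensions (in meters) could be generated and registered as follows (the file name is a placeholder):
create_sheet_of_light_calib_object (0.1, 0.15, 0.005, 0.04, \
                                    'calib_object.dxf')
set_sheet_of_light_param (SheetOfLightModelID, 'calibration_object', \
                          'calib_object.dxf')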
Parameters
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Width of the object.
Default: 0.1
. Length (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Length of the object.
Default: 0.15
. HeightMin (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Minimum height of the ramp.
Default: 0.005
. HeightMax (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Maximum height of the ramp.
Default: 0.04
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
Filename of the model of the calibration object.
Default: ’calib_object.dxf’
File extension: .dxf
Result
The operator create_sheet_of_light_calib_object returns the value 2 (H_MSG_TRUE) if the given
parameters are correct. Otherwise, an exception will be raised.
Execution Information
information, refer to the operator set_sheet_of_light_param. If such information is not available, the re-
sult of the measurement is a disparity image, where each pixel holds a record of the subpixel precise position of
the detected profile.
The operator returns a handle to the sheet-of-light model in SheetOfLightModelID, which is used for all fur-
ther operations on the sheet-of-light model, like modifying parameters of the model, measuring profiles, applying
calibration transformations or accessing the results of measurements.
Mandatory input iconic parameters
In order to perform measurements, you will have to set the following input iconic parameter:
ProfileRegion: defines the region of the profile images, which will be processed by the operator
measure_profile_sheet_of_light. This region should be rectangular and can be generated e.g.,
by using the operator gen_rectangle1. If the region passed to ProfileRegion is not rectangular, its
smallest enclosing rectangle (bounding box) will be used. Note that ProfileRegion is only taken into
account by the operator measure_profile_sheet_of_light and is ignored when disparity images
are processed.
Please note that you have to take special care when using a handle of a sheet-of-light-model
SheetOfLightModelID in multiple threads. One and the same handle cannot be used concurrently in dif-
ferent threads if they modify the handle. Thus, you have to be careful especially if the threads call operators that
change the data of the handle. You can find an according hint in the ’Attention’ section of the operators. Anyway,
if you still want to use the same handle in operators that concurrently write into the handle in different threads you
have to synchronize the threads to assure that they do not access the same handle simultaneously. If you are not sure
if the usage of the same handle is thread-safe, please see the ’Attention’ section of the respective reference manual
entry if it contains a warning pointing to this problem. However, different handles can be used independently and
safely in different threads.
Parameters
. ProfileRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; object
Region of the images containing the profiles to be processed. If the provided region is not rectangular, its
smallest enclosing rectangle will be used.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Names of the generic parameters that can be adjusted for the sheet-of-light model.
Default: ’min_gray’
List of values: GenParamName ∈ {’min_gray’, ’method’, ’ambiguity_solving’, ’score_type’,
’num_profiles’, ’calibration’, ’scale’, ’scale_x’, ’scale_y’, ’scale_z’, ’offset_x’, ’offset_y’, ’offset_z’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; integer / real / string
Values of the generic parameters that can be adjusted for the sheet-of-light model.
Default: 50
Suggested values: GenParamValue ∈ {’default’, ’center_of_gravity’, ’last’, ’first’, ’brightest’, ’none’,
’intensity’, ’width’, ’offset_scale’, 50, 100, 150, 180}
. SheetOfLightModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle for using and accessing the sheet-of-light model.
Example
Result
The operator create_sheet_of_light_model returns the value 2 (H_MSG_TRUE) if the given parameters
are correct. Otherwise, an exception will be raised.
Execution Information
deserialize_sheet_of_light_model (
: : SerializedItemHandle : SheetOfLightModelID )
get_sheet_of_light_param ( : : SheetOfLightModelID,
GenParamName : GenParamValue )
Get the value of a parameter, which has been set in a sheet-of-light model.
The operator get_sheet_of_light_param is used to query the values of the different parameters of a sheet-
of-light model. The names of the desired parameters are passed in the generic parameter GenParamName, the
corresponding values are returned in GenParamValue. All these parameters can be set and changed at any time
with the operator set_sheet_of_light_param.
It is not possible to query the values of several parameters with a single operator call. In order to request the values
of several parameters, you have to call the operator get_sheet_of_light_param successively.
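For example, two parameters are queried with two separate calls:
get_sheet_of_light_param (SheetOfLightModelID, 'method', Method)
get_sheet_of_light_param (SheetOfLightModelID, 'min_gray', MinGray)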
The values of the following model parameters can be queried:
Measurement of the profiles:
’method’: defines the method used to determine the position of the profile. The values ’default’ and ’cen-
ter_of_gravity’ both refer to the same method, whereby the position of the profile is determined column
by column with subpixel accuracy by computing the center of gravity of the gray values g_i of all pixels
fulfilling the condition:
g_i ≥ ’min_gray’
’min_gray’: the smallest gray values taken into account for the measurement of the position of the profile (see
’method’ above).
’num_profiles’: number of profiles for which memory has been allocated within the sheet-of-light model. By de-
fault, ’num_profiles’ is set to 512. If this number of profiles is exceeded during the measurement, memory
will be reallocated automatically at runtime. Since the reallocation process requires some time, we recom-
mend to set ’num_profiles’ to a reasonable value before the measurement is started.
’ambiguity_solving’: this model parameter determines which candidate shall be chosen, if the determination of
the position of the light line is ambiguous.
’first’: the first encountered candidate is returned. This method is the fastest.
’last’: the last encountered candidate is returned.
’brightest’: for each candidate, the brightness of the profile is computed and the candidate having the highest
brightness is returned. The brightness is computed according to:
brightness = \frac{1}{n} \sum_{i=0}^{n} g_i ,
where g_i is the gray value of the pixel and n the number of pixels taken into consideration to determine the
position of the profile.
’score_type’: this model parameter selects which type of score will be calculated during the measurement of the
disparity. The score values give an indication of the quality of the computed disparity.
’none’: no score is computed.
’width’: for each pixel of the disparity, a score value is set to the local width of the profile (i.e., the number
of pixels used to compute the position of the profile).
’intensity’: for each pixel of the disparity, a score value is evaluated by computing the local intensity of the
profile according to:
score = \frac{1}{n} \sum_{i=0}^{n} g_i
where g_i is the gray value of the pixel and n the number of pixels taken into consideration to determine the
position of the profile.
’calibration’: extent of the calibration transformation which shall be applied to the disparity image:
’none’: no calibration transformation is applied.
’xz’: the calibration transformations which describe the geometrical properties of the measurement system
(camera and light line projector) are taken into account, but the movement of the object during the measure-
ment is not taken into account.
’xyz’: the calibration transformations which describe the geometrical properties of the measurement system
(camera and light line projector) as well as the transformation which describe the movement of the object
during the measurement are taken into account.
’offset_scale’: a simplified set of parameters to describe the setup that can be used with default parameters or can be controlled by six parameters. Three of the parameters describe an anisotropic scaling: ’scale_x’ describes the scaling of a pixel in column direction into the new x-axis, ’scale_y’ describes the linear movement between two profiles, and ’scale_z’ describes the scaling of the measured disparities into the new z-axis. The other three parameters describe the offset of the frame of reference of the resulting x, y, z values (’offset_x’, ’offset_y’, ’offset_z’).
’camera_parameter’: the internal parameters of the camera used for the measurement. Those parameters are
required when the calibration extent has been set to ’xz’ or ’xyz’.
’camera_pose’: the pose of the world coordinate system relative to the camera coordinate system. This pose is
required when the calibration extent has been set to ’xz’ or ’xyz’.
’lightplane_pose’: the pose of the light-plane coordinate system relative to the world coordinate system. The
light-plane coordinate system must be chosen so that its plane z=0 coincides with the light plane described
by the light line projector. This pose is required when the calibration extent has been set to ’xz’ or ’xyz’.
’movement_pose’: a pose representing the movement of the object between two successive profile images with re-
spect to the measurement system built by the camera and the laser. This pose is required when the calibration
extent has been set to ’xyz’.
’scale’: with this parameter you can scale the 3D coordinates X, Y and Z that result when applying the calibration
transformations to the disparity image. ’scale’ must be specified as the ratio desired unit/original unit. The
original unit is determined by the coordinates of the calibration object. If you use the standard calibration
plate the original unit is meter. This parameter can only be set if the calibration extent has been set to
’offset_scale’, ’xz’ or ’xyz’. By default, ’scale’ is set to 1.0.
’scale_x’: This value defines the width of a pixel in 3D space. The value is only applicable if the calibration extent is set to ’offset_scale’. By default, ’scale_x’ is set to 1.0.
’scale_y’: This value defines the linear movement between two profiles in 3D space. The value is only applicable if the calibration extent is set to ’offset_scale’. By default, ’scale_y’ is set to 10.0.
’scale_z’: This value defines the height of disparities in 3D space. The value is only applicable if the calibration extent is set to ’offset_scale’. By default, ’scale_z’ is set to 1.0.
’offset_x’: This value defines the x offset of the reference frame for the 3D results. The value is only applicable if the calibration extent is set to ’offset_scale’. By default, ’offset_x’ is set to 0.0.
’offset_y’: This value defines the y offset of the reference frame for the 3D results. The value is only applicable if the calibration extent is set to ’offset_scale’. By default, ’offset_y’ is set to 0.0.
’offset_z’: This value defines the z offset of the reference frame for the 3D results. The value is only applicable if the calibration extent is set to ’offset_scale’. By default, ’offset_z’ is set to 0.0.
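A sketch of a typical ’offset_scale’ configuration whose values could then be queried again with get_sheet_of_light_param (the numbers are purely illustrative):
set_sheet_of_light_param (SheetOfLightModelID, 'calibration', 'offset_scale')
set_sheet_of_light_param (SheetOfLightModelID, 'scale_x', 0.1)
set_sheet_of_light_param (SheetOfLightModelID, 'scale_y', 0.2)
set_sheet_of_light_param (SheetOfLightModelID, 'scale_z', 0.05)
get_sheet_of_light_param (SheetOfLightModelID, 'scale_x', ScaleX)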
Parameters
. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle of the sheet-of-light model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Name of the generic parameter that shall be queried.
Default: ’method’
List of values: GenParamName ∈ {’min_gray’, ’method’, ’ambiguity_solving’, ’score_type’,
’num_profiles’, ’calibration’, ’camera_parameter’, ’camera_pose’, ’lightplane_pose’, ’movement_pose’,
’scale’, ’scale_x’, ’scale_y’, ’scale_z’, ’offset_x’, ’offset_y’, ’offset_z’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Value of the model parameter that shall be queried.
Result
The operator get_sheet_of_light_param returns the value 2 (H_MSG_TRUE) if the given parameters are
correct. Otherwise, an exception will be raised.
Execution Information
Possible Predecessors
query_sheet_of_light_params, set_sheet_of_light_param
Possible Successors
measure_profile_sheet_of_light, set_sheet_of_light_param,
apply_sheet_of_light_calibration
Module
3D Metrology
Get the iconic results of a measurement performed with the sheet-of light technique.
The operator get_sheet_of_light_result provides access to the results of the calibrated and uncalibrated
measurements performed with a given sheet-of-light model. The different kinds of results can be selected by setting
the value of the parameter ResultName as described below:
Non-calibrated results:
’disparity’: the measured disparity, i.e., the subpixel row value at which the profile was detected, is returned for each pixel. The disparity values can be considered as non-calibrated pseudo-range values.
’score’: the score values computed according to the value of the parameter ’score_type’ are returned. If
the parameter ’score_type’ has been set to ’none’, no score value is computed during the measure-
ment, therefore the returned image is empty. Refer to create_sheet_of_light_model and
set_sheet_of_light_param for details on the possible values of the model parameter ’score_type’.
Calibrated results:
Please note that the pixel values of the images returned when setting ResultName to ’x’, ’y’ or ’z’ have the semantics of coordinates with respect to the world coordinate system that is implicitly defined during the calibration of the system. The unit of the returned coordinates depends on the value of the parameter ’scale’. (see
create_sheet_of_light_model and set_sheet_of_light_param for details on the possible values
of the model parameter ’scale’.)
The operator get_sheet_of_light_result returns an empty object if the desired result has not been com-
puted.
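For example, the uncalibrated results could be retrieved as follows:
get_sheet_of_light_result (Disparity, SheetOfLightModelID, 'disparity')
get_sheet_of_light_result (Score, SheetOfLightModelID, 'score')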
Parameters
. ResultValue (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : real
Desired measurement result.
. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle of the sheet-of-light model to be used.
. ResultName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Specify which result of the measurement shall be provided.
Default: ’disparity’
List of values: ResultName ∈ {’disparity’, ’score’, ’x’, ’y’, ’z’}
Result
The operator get_sheet_of_light_result returns the value 2 (H_MSG_TRUE) if the given parameters are
correct. Otherwise, an exception will be raised.
Execution Information
get_sheet_of_light_result_object_model_3d (
: : SheetOfLightModelID : ObjectModel3D )
Get the result of a calibrated measurement performed with the sheet-of-light technique as a 3D object model.
The operator get_sheet_of_light_result_object_model_3d returns the result of a fully calibrated
sheet-of-light measurement as a 3D object model. The handle of the sheet-of-light model with which the mea-
surement is performed must be passed to SheetOfLightModelID. The calibration extent of the sheet-of-light
model (’calibration’) must have been set to ’xyz’ or ’offset_scale’ before applying the measurement, otherwise the
computed coordinates cannot be returned as a 3D object model and an exception is raised.
The handle of the 3D object model resulting from the measurement is returned in ObjectModel3D. For the
3D points within this 3D object model no triangular meshing is available, therefore no faces are stored in the
3D object model. If a 3D object model with triangular meshing is required for the subsequent processing, use
the operator get_sheet_of_light_result in order to retrieve the ’x’, ’y’, and ’z’ coordinates from the
sheet-of-light model and then call the operator xyz_to_object_model_3d with suitable parameters. Refer
to xyz_to_object_model_3d for more information about 3D object models.
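A sketch of this alternative, following the recipe above (xyz_to_object_model_3d is called here with its basic parameters only):
get_sheet_of_light_result (X, SheetOfLightModelID, 'x')
get_sheet_of_light_result (Y, SheetOfLightModelID, 'y')
get_sheet_of_light_result (Z, SheetOfLightModelID, 'z')
xyz_to_object_model_3d (X, Y, Z, ObjectModel3D)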
The unit of the returned coordinates depends on the value of the parameter ’scale’ that was set for the
sheet-of-light model before applying the measurement. See create_sheet_of_light_model and
set_sheet_of_light_param for details on the possible values of the model parameter ’scale’. The op-
erator get_sheet_of_light_result_object_model_3d returns a handle to an empty 3D object model
if the desired result has not been measured yet.
Parameters
. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle for accessing the sheet-of-light model.
. ObjectModel3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . object_model_3d ; handle
Handle of the resulting 3D object model.
Result
The operator get_sheet_of_light_result_object_model_3d returns the value 2 (H_MSG_TRUE) if
the given parameters are correct. Otherwise, an exception will be raised.
Execution Information
measure_profile_sheet_of_light (
ProfileImage : : SheetOfLightModelID, MovementPose : )
Process the profile image provided as input and store the resulting disparity to the sheet-of-light model.
The operator measure_profile_sheet_of_light processes the ProfileImage and stores the resulting
disparity values to the sheet-of-light model. Please note that ProfileImage will only be processed in the
region defined by ProfileRegion as set with the operator create_sheet_of_light_model. Since
ProfileImage is processed column by column, the profile must be oriented roughly horizontally.
Influence of different model parameters
If the model parameter ’score_type’ has been set to ’intensity’ or ’width’, score values are also computed and stored
into the model. Refer to set_sheet_of_light_param for details on the possible values of ’score_type’.
If the model parameter ’calibration’ has been set to ’xz’, ’xyz’, or ’offset_scale’ and all parameters required to deter-
mine the calibration transformation have been set to the sheet-of-light model, the calibration transformations will be
automatically applied to the disparity values after the measurement. Refer to set_sheet_of_light_param
for details on setting the calibration parameters to the sheet-of-light model.
Setting MovementPose
MovementPose describes the movement of the object between the acquisition of the previous profile and the
acquisition of the current profile.
If the model parameter ’calibration’ has been set to ’none’ or ’xz’ (see set_sheet_of_light_param)
the movement of the object is not taken into consideration by the calibration transformation. Therefore,
MovementPose is ignored, and it can be set to an empty tuple.
If the model parameter ’calibration’ has been set to ’xyz’, the pose describing the movement of the object must
be specified to the sheet-of-light model. This can be done here with MovementPose or with the parameter
’movement_pose’ in the operator set_sheet_of_light_param.
If the model parameter ’calibration’ has been set to ’offset_scale’, a movement can be specified, but it should be considered that the space to which this transformation is applied is most probably not metric.
If the movement of the object between the recording of two successive profiles is constant, we recommend to
set MovementPose here to an empty tuple, and to set the constant pose via the parameter ’movement_pose’ in
the operator set_sheet_of_light_param. This configuration is often encountered, for example when the
object under measurement is moved by a conveyor belt and measured by a fixed measurement system.
If the movement of the object between the recording of two successive profiles is not constant, for example because
the measurement system is moved over the object by a robot, you must set MovementPose here for each call of
measure_profile_sheet_of_light.
MovementPose must be expressed in the world coordinate system that is implicitly defined during the calibration
of the measurement system.
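A sketch of the constant-movement case (the image acquisition, AcqHandle, MovementPose, and NumProfiles are placeholders):
set_sheet_of_light_param (SheetOfLightModelID, 'movement_pose', MovementPose)
for Index := 1 to NumProfiles by 1
    grab_image_async (ProfileImage, AcqHandle, -1)
    measure_profile_sheet_of_light (ProfileImage, SheetOfLightModelID, [])
endfor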
Parameters
. ProfileImage (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2
Input image.
. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle of the sheet-of-light model.
. MovementPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer / real
Pose describing the movement of the scene under measurement between the previously processed profile
image and the current profile image.
Result
The operator measure_profile_sheet_of_light returns the value 2 (H_MSG_TRUE) if the given param-
eters are correct. Otherwise, an exception will be raised.
Execution Information
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Successors
apply_sheet_of_light_calibration, get_sheet_of_light_result
See also
query_sheet_of_light_params, get_sheet_of_light_param,
get_sheet_of_light_result, apply_sheet_of_light_calibration
Module
3D Metrology
query_sheet_of_light_params ( : : SheetOfLightModelID,
QueryName : GenParamName )
For a given sheet-of-light model get the names of the generic iconic or control parameters that can be used in the
different sheet-of-light operators.
The operator query_sheet_of_light_params returns the names of the generic parameters that are supported by the operators create_sheet_of_light_model, set_sheet_of_light_param, get_sheet_of_light_param, and get_sheet_of_light_result. The parameter QueryName is used to select the desired parameter group.
The returned parameter list does not depend on the current state of the model or its results.
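For example, the parameter names accepted by set_sheet_of_light_param could be listed as follows:
query_sheet_of_light_params (SheetOfLightModelID, 'set_model_params', \
                             GenParamNames)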
Parameters
. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle of the sheet-of-light model.
. QueryName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Name of the parameter group.
Default: ’create_model_params’
List of values: QueryName ∈ {’create_model_params’, ’set_model_params’, ’get_model_params’,
’get_result_objects’}
. GenParamName (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value-array ; string
List containing the names of the supported generic parameters.
Result
The operator query_sheet_of_light_params returns the value 2 (H_MSG_TRUE) if the given parameters
are correct. Otherwise, an exception will be raised.
Execution Information
reset_sheet_of_light_model ( : : SheetOfLightModelID : )
• SheetOfLightModelID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
See also
clear_sheet_of_light_model
Module
3D Metrology
serialize_sheet_of_light_model (
: : SheetOfLightModelID : SerializedItemHandle )
set_profile_sheet_of_light (
ProfileDisparityImage : : SheetOfLightModelID, MovementPoses : )
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Successors
get_sheet_of_light_result, get_sheet_of_light_result_object_model_3d
See also
query_sheet_of_light_params, get_sheet_of_light_param,
get_sheet_of_light_result, apply_sheet_of_light_calibration
Module
3D Metrology
’method’: defines the method used to determine the position of the profile. The values ’default’ and ’cen-
ter_of_gravity’ both refer to the same method, whereby the position of the profile is determined column
by column with subpixel accuracy by computing the center of gravity of the gray values g_i of all pixels
fulfilling the condition:
g_i ≥ ’min_gray’
’min_gray’: lowest gray values taken into account for the measurement of the position of the profile (see ’cen-
ter_of_gravity’).
Suggested values: 20, 50, 100, 128, 200, 220, 250
Default: 100
’num_profiles’: number of profiles for which memory has been allocated within the sheet-of-light model. By
default, ’num_profiles’ is set to 512. If this number of profiles is exceeded, memory will be reallocated
automatically during the measurement.
Suggested values: 1, 2, 50, 100, 512, 1024, 3000
Default: 512
’ambiguity_solving’: method applied to determine which candidate shall be chosen if the determination of the
position of the profile is ambiguous.
’first’: the first encountered candidate is returned. This method is the fastest.
’last’: the last encountered candidate is returned.
’brightest’: for each candidate, the brightness of the profile is computed and the candidate having the highest
brightness is returned. The brightness is computed according to:
brightness = \frac{1}{n} \sum_{i=0}^{n} g_i ,
where g_i is the gray value of the pixel and n the number of pixels taken into consideration to determine the
position of the profile.
Default: ’first’
’score_type’: method used to calculate a score for the measurement of the position of the profile.
’none’: no score is computed.
’width’: for each pixel of the disparity, the score value is set to the number of pixels used to determine the
disparity value.
’intensity’: for each pixel of the disparity, a score value is evaluated by computing the local intensity of the
profile according to:
score = \frac{1}{n} \sum_{i=0}^{n} g_i
where g_i is the gray value of the pixel and n the number of pixels taken into consideration to determine the
position of the profile.
Default: ’none’
’calibration’: extent of the calibration transformation which shall be applied to the disparity image:
’none’: no calibration transformation is applied.
’xz’: the calibration transformations which describe the geometrical properties of the measurement system
(camera and light line projector) are taken into account, but the movement of the object during the measure-
ment is not taken into account.
’xyz’: the calibration transformations which describe the geometrical properties of the measurement system
(camera and light line projector) as well as the transformation which describes the movement of the object
during the measurement are taken into account.
’offset_scale’: a simplified set of parameters to describe the setup that can be used with default parameters
or controlled by six parameters. Three of the parameters describe an anisotropic scaling: ’scale_x’ describes
the scaling of a pixel in column direction into the new x-axis, ’scale_y’ describes the linear movement
between two profiles, and ’scale_z’ describes the scaling of the measured disparities into the new z-axis. The
other three parameters describe the offset of the frame of reference of the resulting x,y,z values (’offset_x’,
’offset_y’, ’offset_z’).
Default: ’none’
’camera_parameter’: the internal parameters of the camera used for the measurement. Those parameters are
required if the calibration extent has been set to ’xz’ or ’xyz’. If calibrate_sheet_of_light shall be
used for calibration, this parameter is used to set the initial camera parameters.
’calibration_object’: the calibration object used for calibration with calibrate_sheet_of_light. If
calibrate_sheet_of_light shall be used for calibration, this parameter must be set to the filename
of a calibration object created with create_sheet_of_light_calib_object.
’camera_pose’: the pose that transforms the camera coordinate system into the world coordinate system, i.e., the
pose that could be used to transform point coordinates from the world coordinate system into the camera
coordinate system. This pose is required if the calibration extent has been set to ’xz’ or ’xyz’.
Note that the world coordinate system is implicitly defined by setting the ’camera_pose’.
’lightplane_pose’: the pose that transforms the light plane coordinate system into the world coordinate system,
i.e., the pose that could be used to transform point coordinates from the world coordinate system into the
light plane coordinate system. The light plane coordinate system must be chosen such that the plane z=0
coincides with the light plane. This pose is required if the calibration extent has been set to ’xz’ or ’xyz’.
’movement_pose’: a pose representing the movement of the object between two successive profile images with
respect to the measurement system built by the camera and the laser. This pose must be expressed in the
world coordinate system. It is required if the calibration extent has been set to ’xyz’.
’scale’: with this value you can scale the 3D coordinates X, Y and Z that result when applying the calibration
transformations to the disparity image. The model parameter ’scale’ must be specified as the ratio desired
unit/original unit. The original unit is determined by the coordinates of the calibration object. If the original
unit is meters (which is the case if you use the standard calibration plate), you can set ’scale’ to the desired
unit directly by selecting ’m’, ’cm’, ’mm’, ’microns’, or ’um’. This parameter can only be set if the calibration
extent has been set to ’offset_scale’, ’xz’ or ’xyz’.
Suggested values: ’m’, ’cm’, ’mm’, ’microns’, ’um’, 1.0, 0.01, 0.001, 1.0e-6
Default value: 1.0
’scale_x’: This value defines the width of a pixel in the 3D space. This parameter can only be set if the calibration
extent has been set to ’offset_scale’.
Suggested values: 10.0, 1.0, 0.01, 0.001, 1.0e-6
Default value: 1.0
’scale_y’: This value defines the linear movement between two profiles in the 3D space. This parameter can only
be set if the calibration extent has been set to ’offset_scale’.
Suggested values: 100.0, 10.0, 1.0, 0.1, 1.0e-6
Default value: 10.0
’scale_z’: This value defines the height of a pixel in the 3D space. This parameter can only be set if the calibration
extent has been set to ’offset_scale’.
Suggested values: 10.0, 1.0, 0.01, 0.001, 1.0e-6
Default value: 1.0
’offset_x’: This value defines the x offset of the reference frame for the 3D results. This parameter can only be set
if the calibration extent has been set to ’offset_scale’.
Suggested values: 10.0, 0.0, 0.01, 0.001, 1.0e-6
Default value: 0.0
’offset_y’: This value defines the y offset of the reference frame for the 3D results. This parameter can only be set
if the calibration extent has been set to ’offset_scale’.
Suggested values: 10.0, 0.0, 0.01, 0.001, 1.0e-6
Default value: 0.0
’offset_z’: This value defines the z offset of the reference frame for the 3D results. This parameter can only be set
if the calibration extent has been set to ’offset_scale’.
Suggested values: 10.0, 0.0, 0.01, 0.001, 1.0e-6
Default value: 0.0
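The following lines sketch how such a configuration could look in HDevelop; the profile region and all parameter
values are placeholders for a typical uncalibrated setup:
* Sketch: restrict the measurement to a profile region and configure the model
* (all coordinates and values are placeholders).
gen_rectangle1 (ProfileRegion, 120, 0, 379, 751)
create_sheet_of_light_model (ProfileRegion, ['min_gray','num_profiles'], [70,290], SheetOfLightModelID)
set_sheet_of_light_param (SheetOfLightModelID, 'method', 'center_of_gravity')
set_sheet_of_light_param (SheetOfLightModelID, 'ambiguity_solving', 'first')
set_sheet_of_light_param (SheetOfLightModelID, 'calibration', 'offset_scale')
set_sheet_of_light_param (SheetOfLightModelID, 'scale_y', 10.0)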
Parameters
. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle of the sheet-of-light model.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Name of the model parameter that shall be adjusted for the sheet-of-light model.
Default: ’method’
List of values: GenParamName ∈ {’method’, ’ambiguity_solving’, ’score_type’, ’num_profiles’,
’min_gray’, ’scale’, ’calibration’, ’calibration_object’, ’camera_parameter’, ’camera_pose’, ’lightplane_pose’,
’movement_pose’, ’scale_x’, ’scale_y’, ’scale_z’, ’offset_x’, ’offset_y’, ’offset_z’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Value of the model parameter that shall be adjusted for the sheet-of-light model.
Default: ’center_of_gravity’
Suggested values: GenParamValue ∈ {’default’, ’center_of_gravity’, ’last’, ’first’, ’brightest’, ’none’,
’intensity’, ’width’, ’xz’, ’xyz’, ’offset_scale’, ’m’, ’cm’, ’mm’, ’um’, ’microns’, 1.0, 1e-2, 1e-3, 1e-6}
Result
The operator set_sheet_of_light_param returns the value 2 (H_MSG_TRUE) if the given parameters are
correct. Otherwise, an exception will be raised.
Execution Information
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Successors
get_sheet_of_light_param, measure_profile_sheet_of_light,
apply_sheet_of_light_calibration
Alternatives
create_sheet_of_light_model
See also
query_sheet_of_light_params, get_sheet_of_light_param,
get_sheet_of_light_result
Module
3D Metrology
write_sheet_of_light_model ( : : SheetOfLightModelID,
FileName : )
Parameters
. SheetOfLightModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sheet_of_light_model ; handle
Handle of the sheet-of-light model.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
Name of the sheet-of-light model file.
Default: ’sheet_of_light_model.solm’
File extension: .solm
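A brief sketch of writing a configured model to the default file name and reading it back (handle names assumed):
write_sheet_of_light_model (SheetOfLightModelID, 'sheet_of_light_model.solm')
read_sheet_of_light_model ('sheet_of_light_model.solm', SheetOfLightModelID2)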
Result
The operator write_sheet_of_light_model returns the value 2 (H_MSG_TRUE) if the passed handle is
valid and if the model can be written into the named file. Otherwise, an exception is raised.
Execution Information
Create a structured light model: In the first step, a structured light model is created with
• create_structured_light_model (ModelType=’3d_reconstruction’)
or read with
• read_structured_light_model.
Set the model parameters: The different structured light model parameters can then be set with
• set_structured_light_model_param
or queried with
• get_structured_light_model_param.
Generate the pattern images: After setting all relevant parameters, the pattern images are generated with
gen_structured_light_pattern. Please ensure that the output images match the requirements of the
particular setup.
Use the patterns to illuminate the surface and acquire the camera images: At this stage, the pattern images
are projected. The respective image of the illuminated surface is acquired by the camera for each pattern
image.
When calibrating the system, images of the illuminated calibration object need to be acquired. The calibra-
tion process is shown in detail in the example program structured_light_calibration.hdev.
The obtained calibration information can then be specified with the parameter ’camera_setup_model’ of
set_structured_light_model_param.
Decode the acquired images: The acquired CameraImages can be decoded with
decode_structured_light_pattern. Upon calling this operator, the correspondence images are
created and stored in the model StructuredLightModel.
Get the results: The decoded ’correspondence_image’, as well as other results can be queried with
get_structured_light_object. For more details of the different objects that can be queried, please
refer to the operator’s documentation.
Perform the reconstruction: The reconstructed surface can be obtained with
reconstruct_surface_structured_light.
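The workflow above could be sketched as follows; the parameter values are placeholders, and the name of the
output parameter of reconstruct_surface_structured_light is assumed here only for illustration:
* Create the model and set the required parameters (e.g., the calibration
* information via 'camera_setup_model', cf. the step list above).
create_structured_light_model ('3d_reconstruction', StructuredLightModel)
* set_structured_light_model_param (StructuredLightModel, 'camera_setup_model', CameraSetupModel)
gen_structured_light_pattern (PatternImages, StructuredLightModel)
* ... project each pattern image and acquire the corresponding camera image ...
decode_structured_light_pattern (CameraImages, StructuredLightModel)
get_structured_light_object (CorrespondenceImage, StructuredLightModel, 'correspondence_image')
reconstruct_surface_structured_light (StructuredLightModel, ObjectModel3D)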
Further operators
The structured light model offers various other operators that help access and update the various parameters of the
model.
The operator write_structured_light_model enables writing the structured light model to a file. Please
note that previously generated pattern images are not written in this file. A structured light model file can be read
using read_structured_light_model.
Furthermore, it is possible to serialize and deserialize the structured light model using
serialize_structured_light_model and deserialize_structured_light_model.
Further Information
See also the “Solution Guide Basics” for further details. For a list of operators, please refer to Inspection
/ Structured Light.
Calibration
This chapter gives guidance regarding the basic concept of retrieving the internal and external parameters of your
camera. The following paragraphs state how to successfully calibrate a camera. In particular, they describe
• the available 3D camera models and how 3D points are transformed into the image coordinate system, and
• limitations related to specific camera types.
Calibration Object
For a successful calibration of your camera setup, at least one calibration object with accurately known metric
properties is needed, e.g., a HALCON calibration plate. For the calibration, take a series of images of the calibra-
tion object in different positions and orientations. The success of the calibration highly depends on the quality of
the calibration object and the images. So you might want to exercise special diligence during the acquisition of the
calibration images. See the section “How to take a set of suitable images?” for further information.
A calibration plate is covered by multiple calibration marks, which are extracted in the calibration images in order
to retrieve their coordinates. The orientation of the plate has to be determined unambiguously; hence, a finder
pattern is also part of the imprint.
Your distributor can provide you with two different types of standard HALCON calibration plates:
Calibration plate with hexagonally arranged marks: As finder pattern, there are special groups of mark
hexagons where some of the marks contain dot-shaped holes (see create_caltab). One finder pat-
tern has to be visible to locate the calibration plate. To make sure the plate is not inverted, at least a second
one needs to be seen, but the plate does not have to be fully visible in the image. The origin of the coordinate
system is located at the center of the central mark of the first finder pattern. The z-axis of the coordinate
system is pointing into the calibration plate, its x-axis is pointing to the right, and its y-axis is pointing
downwards with the direction of view along the z-axis.
When using camera_calibration instead of calibrate_cameras, this calibration plate is not
applicable.
Calibration plate with rectangularly arranged marks: The finder pattern consists of the surrounding frame and
the triangular corner marker (see gen_caltab). Thus, the plate has to be fully visible in the image. The
origin is located in the middle of the surface of the calibration plate. The z-axis of the coordinate system
is pointing into the calibration plate, its x-axis is pointing to the right, and its y-axis is pointing downwards
with the direction of view along the z-axis.
When acquiring your calibration images, note that there are different recommendations on how to take them,
depending on your used calibration plate (see section “How to take a set of suitable images?”).
Preparing the Calibration Input Data
Before calling a calibration operator (e.g., calibrate_cameras), you must create and adapt the calibration
data model with the following steps:
1. Create a calibration data model with the operator create_calib_data, specifying the number of
cameras in the setup and the number of used calibration objects.
2. Specify the camera type and the initial internal camera parameters with the operator
set_calib_data_cam_param.
3. Specify the description of all calibration objects with the operator
set_calib_data_calib_object.
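A minimal sketch of these steps for a single camera and a single calibration plate; all file names, initial values,
and the number of images are placeholders:
create_calib_data ('calibration_object', 1, 1, CalibDataID)
* Initial internal parameters for an 'area_scan_division' camera (placeholders).
StartCamParam := ['area_scan_division', 0.016, 0, 4.4e-6, 4.4e-6, 320, 240, 640, 480]
set_calib_data_cam_param (CalibDataID, 0, [], StartCamParam)
set_calib_data_calib_object (CalibDataID, 0, 'calplate_80mm.cpd')
* Collect one observation per calibration image and calibrate.
NumImages := 12
for I := 1 to NumImages by 1
    read_image (Image, 'calib_image_' + I$'02')
    find_calib_object (Image, CalibDataID, 0, 0, I - 1, [], [])
endfor
calibrate_cameras (CalibDataID, Error)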
After a successful calibration, the root mean square error (RMSE) of the back projection of the optimization is
returned in Error (in pixels). The error gives a general indication whether the optimization was successful as it
corresponds to the average distance (in pixels) between the back projected calibration points and their extracted
image coordinates.
If only a single camera is calibrated, an Error in the order of 0.1 pixel (the typical detection error by extraction
of the coordinates of the projected calibration markers) is an indication that the optimization fits the observation
data well. If Error strongly differs from 0.1 pixels, the calibration did not perform well. Reasons for this might
be, e.g., a poor image quality, an insufficient number of calibration images, or an inaccurate calibration plate.
For information about how to check the success of the calibration using a multi-view camera setup, see the respec-
tive section in the chapter Calibration / Multi-View.
Camera Parameters
Regarding camera parameters, you can distinguish between internal and external camera parameters.
Internal camera parameters: These parameters describe the characteristics of the used camera, especially the di-
mension of the sensor itself and the projection properties of the used combination of lens, camera, and frame
grabber. Below is an overview of all available camera types and their respective parameters CameraParam.
In the list, “projective cameras” refers to the property that the lens performs a perspective projection on the
object-side of the lens, while “telecentric cameras” refers to the property that the lens performs a telecentric
projection on the object-side of the lens.
Area scan cameras have 9 to 16 internal parameters depending on the camera type.
For reasons explained below, parameters that are marked with an * asterisk are fixed and not estimated
by the algorithm.
Area scan cameras with regular lenses
Projective area scan cameras with regular lenses
• ’area_scan_division’:
[’area_scan_division’, Focus, Kappa, Sx, Sy*, Cx, Cy, ImageWidth, ImageHeight]
• ’area_scan_polynomial’:
[’area_scan_polynomial’, Focus, K1, K2, K3, P1, P2, Sx, Sy*, Cx, Cy, ImageWidth, Image-
Height]
Telecentric area scan cameras with regular lenses
• ’area_scan_telecentric_division’:
[’area_scan_telecentric_division’, Magnification, Kappa, Sx, Sy*, Cx, Cy, ImageWidth, Im-
ageHeight]
• ’area_scan_telecentric_polynomial’:
[’area_scan_telecentric_polynomial’, Magnification, K1, K2, K3, P1, P2, Sx, Sy*, Cx, Cy,
ImageWidth, ImageHeight]
Area scan cameras with tilt lenses
Projective area scan cameras with tilt lenses
• ’area_scan_tilt_division’:
[’area_scan_tilt_division’, Focus, Kappa, ImagePlaneDist, Tilt, Rot, Sx, Sy*, Cx, Cy, Im-
ageWidth, ImageHeight]
• ’area_scan_tilt_polynomial’:
[’area_scan_tilt_polynomial’, Focus, K1, K2, K3, P1, P2, ImagePlaneDist, Tilt, Rot, Sx, Sy*,
Cx, Cy, ImageWidth, ImageHeight]
• ’area_scan_tilt_image_side_telecentric_division’:
[’area_scan_tilt_image_side_telecentric_division’, Focus, Kappa, Tilt, Rot, Sx*, Sy*, Cx, Cy,
ImageWidth, ImageHeight]
• ’area_scan_tilt_image_side_telecentric_polynomial’:
[’area_scan_tilt_image_side_telecentric_polynomial’, Focus, K1, K2, K3, P1, P2, Tilt, Rot,
Sx*, Sy*, Cx, Cy, ImageWidth, ImageHeight]
Telecentric area scan cameras with tilt lenses
• ’area_scan_tilt_bilateral_telecentric_division’:
[’area_scan_tilt_bilateral_telecentric_division’, Magnification, Kappa, Tilt, Rot, Sx*, Sy*,
Cx, Cy, ImageWidth, ImageHeight]
• ’area_scan_tilt_bilateral_telecentric_polynomial’:
[’area_scan_tilt_bilateral_telecentric_polynomial’, Magnification, K1, K2, K3, P1, P2, Tilt,
Rot, Sx*, Sy*, Cx, Cy, ImageWidth, ImageHeight]
• ’area_scan_tilt_object_side_telecentric_division’:
[’area_scan_tilt_object_side_telecentric_division’, Magnification, Kappa, ImagePlaneDist,
Tilt, Rot, Sx, Sy*, Cx, Cy, ImageWidth, ImageHeight]
• ’area_scan_tilt_object_side_telecentric_polynomial’:
[’area_scan_tilt_object_side_telecentric_polynomial’, Magnification, K1, K2, K3, P1, P2,
ImagePlaneDist, Tilt, Rot, Sx, Sy*, Cx, Cy, ImageWidth, ImageHeight]
Area scan cameras with hypercentric lenses
Projective area scan cameras with hypercentric lenses
• ’area_scan_hypercentric_division’:
[’area_scan_hypercentric_division’, Focus, Kappa, Sx, Sy*, Cx, Cy, ImageWidth, Image-
Height]
• ’area_scan_hypercentric_polynomial’:
[’area_scan_hypercentric_polynomial’, Focus, K1, K2, K3, P1, P2, Sx, Sy*, Cx, Cy, Im-
ageWidth, ImageHeight]
Description of the internal camera parameters of area scan cameras:
The tilt of the lens is described by the parameters rot , tilt and ImagePlaneDist. rot
describes the orientation of the tilt axis in relation to the x-axis of the sensor and has to be
applied first. tilt describes the actual tilt of the lens. ImagePlaneDist is the distance of
the exit pupil of the lens to the image plane.
These angles are typically roughly known based on the considerations that led to the use of
the tilt lens or can be read off from the mechanism by which the lens is tilted.
Sx, Sy: Scale factors. They correspond to the horizontal and vertical distance between two neigh-
boring cells on the sensor. Since in most cases the image signal is sampled line-synchronously,
Sy is determined by the dimension of the sensor and does not need to be estimated by the cal-
ibration process.
The initial values depend on the dimensions of the used chip of the camera. See the technical
specification of your camera for the actual values. Attention: These values increase if the
image is subsampled!
As projective cameras are described through the pinhole camera model, it is impossible to
determine Focus, Sx , and Sy simultaneously. Therefore, the algorithm will keep Sy fixed.
For telecentric lenses, it is impossible to determine Magnification, Sx , and Sy simulta-
neously. Therefore, the algorithm will keep Sy fixed.
For image-side telecentric tilt lenses (see chapter “Basics”, section “Camera Model and Pa-
rameters” in the “Solution Guide III-C 3D Vision” for an overview of different
types of tilt lenses), it is impossible to determine Focus, Sx , Sy , and the tilt parameters tilt
and rot simultaneously. Therefore, additionally to Sy , the algorithm will keep Sx fixed.
For bilateral telecentric tilt lenses, it is impossible to determine Magnification, Sx ,
Sy , and the tilt parameters tilt and rot simultaneously. Therefore, additionally to Sy , the
algorithm will keep Sx fixed.
Cx, Cy: Column (Cx ) and row (Cy ) coordinate of the principal point of the image (center of the
radial distortion).
Use the half image width and height as initial values. Attention: These values decrease if the
image is subsampled!
ImageWidth, ImageHeight: Width and height of the sampled image. Attention: These values
decrease if the image is subsampled!
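For illustration, initial internal parameters for an ’area_scan_division’ camera could be assembled as follows; the
numeric values are placeholders for a 640 × 480 sensor with 4.4 µm pixels and a 16 mm lens:
* Kappa starts at 0; Cx/Cy are initialized with half the image size.
gen_cam_par_area_scan_division (0.016, 0, 4.4e-6, 4.4e-6, 320, 240, 640, 480, CameraParam)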
Line scan cameras have 12 or 16 internal parameters depending on the camera type.
For reasons explained below, parameters that are marked with an * asterisk are fixed and not estimated
by the algorithm.
Line scan cameras with regular lenses
Projective line scan cameras with regular lenses
• ’line_scan_division’:
[’line_scan_division’, Focus, Kappa, Sx*, Sy*, Cx, Cy, ImageWidth, ImageHeight, Vx, Vy, Vz]
• ’line_scan_polynomial’:
[’line_scan_polynomial’, Focus, K1, K2, K3, P1, P2, Sx*, Sy*, Cx, Cy, ImageWidth, Image-
Height, Vx, Vy, Vz]
Vy = l[m] / l[row]
With
If, compared to the first setup, the camera is rotated -20 degrees around the x-axis of the
camera coordinate system, the following initial values result:
Vx' = Vx = 0
Vy' = cos(−20°) · Vy
Vz' = sin(−20°) · Vy
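A brief numerical illustration of these formulas with hypothetical values:
* Hypothetical setup: an object of length 0.3 m is imaged over 1500 rows.
Vy := 0.3 / 1500
* Initial motion vector for a camera rotated -20 degrees around its x-axis.
VxRot := 0
VyRot := cos(rad(-20)) * Vy
VzRot := sin(rad(-20)) * Vy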
The quality of the initial values for Vx , Vy , and Vz is crucial for the success of the whole
calibration. If they are not precise enough, the calibration may fail.
Note that for telecentric line scan cameras, the value of Vz has no influence on the image
position of 3D points and therefore cannot be determined. Consequently, Vz is not optimized
and left at its initial value for telecentric line scan cameras. Therefore, the initial value of Vz
should be set to 0. For setups with multiple telecentric line scan cameras that share a common
motion vector (for a detailed explanation, see Calibration / Multi-View), however, Vz can be
determined based on the camera poses. Therefore, in this case Vz is optimized.
Restrictions for internal camera parameters Note that the term focal length is not quite correct and
would be appropriate only for an infinite object distance. To simplify matters, the term focal length is
always used even if the image distance is meant.
For all operators that use camera parameters as input the respective parameter values are checked as to
whether they fulfill the following restrictions:
Sx > 0
Sy ≥ 0
Focus > 0
Magnification > 0
ImageWidth > 0
ImageHeight > 0
ImagePlaneDist > 0
0 ≤ tilt < 90
0 ≤ rot < 360
Vx² + Vy² + Vz² ≠ 0
For some operators the restrictions differ slightly. In particular, for operators that do not support line
scan cameras the following restriction applies:
Sy > 0
External camera parameters: The following 6 parameters describe the 3D pose, i.e., the position and orientation
of the world coordinate system relative to the camera coordinate system. The x- and y-axis of the camera
coordinate system are parallel to the column and row axes of the image, while the z-axis is perpendicular
to the image plane. For line scan cameras, the pose of the world coordinate system refers to the camera
coordinate system of the first image line.
The pose tuple contains one more element, which is the representation type of the pose. It codes the com-
bination of the parameters OrderOfTransform, OrderOfRotation, and ViewOfTransform. See
create_pose for more information about 3D poses.
When using a standard HALCON calibration plate, the world coordinate system is defined by the coordinate
system of the calibration plate. See the section “Calibration Object” above for further information.
If a HALCON calibration plate is used, you can use the operator find_calib_object to determine
initial values for all parameters. Using HALCON calibration plates with rectangularly arranged marks,
a combination of the two operators find_caltab and find_marks_and_pose will have the same
effect.
Parameter units: HALCON calibration plates use meters as unit. The camera parameters use corresponding
units. Of course, calibration can be done using different units, but in this case the related parameters have to
be adapted. Here, we list the HALCON default units for the different camera parameters:
Parameter Unit
External RotX, RotY, RotZ deg, deg, deg
TransX, TransY, TransZ m, m, m
Internal Cx, Cy px, px
Focus m
ImagePlaneDist m
ImageWidth, ImageHeight px, px
K1, K2, K3 m−2 , m−4 , m−6
Kappa (κ) m−2
P1, P2 m−1 , m−1
Magnification - (scalar)
Sx, Sy m/px, m/px
Tilt, Rot deg, deg
Vx, Vy, Vz m/scanline, m/scanline, m/scanline
How to obtain an appropriate calibration plate? You can obtain high-precision calibration plates in various
sizes and materials from your local distributor. These calibration plates come with associated description
files and can be easily extracted with find_calib_object.
It is also possible to use any arbitrary object for calibration. The only requirement is that the object has
characteristic points that can be robustly detected in the image and that the 3D world position of these points
is known with high accuracy. See the “Solution Guide III-C 3D Vision” for details.
Self-printed calibration objects are usually not accurate enough for high-precision applications.
How to take a set of suitable images? With the combination of lens (fixed focus setting!), camera, and frame
grabber to be calibrated, a set of images of the calibration plate must be taken (see open_framegrabber
and grab_image).
Your local distributor can provide you with two different types of standard HALCON calibration plates:
Calibration plates with hexagonally arranged marks (see create_caltab) and calibration plates with
rectangularly arranged marks (see gen_caltab). Since these two calibration plates substantially differ
from each other, in some cases additional particularities apply (see below).
The parameters and hints listed below should be considered when taking the calibration images. For a
successful calibration, the setup and the used set of images should have certain qualities. These qualities
may vary for the specific task and demand. In order to give guidance, values and hints suitable for a basic
monocular camera setup are mentioned.
• Aperture
The aperture of the camera must not be changed during the acquisition of the images. If the
aperture is changed after the calibration, the camera must be calibrated anew.
• Camera pose
The position of the camera must not be changed during the image acquisition.
• Focus
The calibration images should be sharply focused, i.e., transitions between objects should be
clearly delimited. The focus, respectively the focal length, must not be changed during the image
acquisition.
• Pattern coverage
How much of the calibration pattern must at least be contained in the images depends on the used
plate.
– Plate with hexagonally arranged marks: At least one finder pattern needs to be visible. If at
least two finder patterns are visible in the image, it is possible to detect whether the calibration
plate is mirrored or not. In a mirrored case, a suitable error will be returned.
– Plate with rectangularly arranged marks: Plate needs to be completely visible, as the finder
pattern is the frame surrounding the point marks.
Nevertheless, of course, the more of the calibration pattern is visible to the camera and the more
of the field of view is filled by the calibration plate, the better.
• Mark diameter
The marks of the calibration plates should have a diameter of at least 20 pixels in each image. This
requirement is essential for a successful calibration.
• Contrast
The contrast between the light and dark areas of the calibration plate should be at least 100 gray
values (regarding byte images).
• Overexposure
To avoid overexposed images, make sure that gray values of the light parts of the calibration plate
do not exceed 240 (regarding byte images), especially not in the neighborhood of the calibration
marks.
• Homogeneity
The calibration plate should be illuminated homogeneously and reflections should be avoided.
As a rule of thumb, the range of gray values of the light parts of the plate should not exceed 45
(regarding byte images).
• Image format
Calibration images should be saved in an uncompressed format. Compression artifacts which
occur, e.g., when using JPG format and high compression rates need to be avoided.
• Preprocessing
Calibration images should not be preprocessed. If image properties like contrast or focus are
insufficient (see above), the issues need to be resolved by adjusting the camera setup instead of
processing the images ahead of the calibration.
Which distortion model should be used? Two distortion models can be used: The division model and the poly-
nomial model. The division model uses one parameter to model the radial distortions while the polynomial
model uses five parameters to model radial and decentering distortions (see the sections “Camera parame-
ters” and “The Used 3D camera model”).
The advantages of the division model are that the distortions can be applied faster, especially the inverse
distortions, i.e., if world coordinates are projected into the image plane. Furthermore, if only few calibration
images are used or if the field of view is not covered sufficiently, the division model typically yields more
stable results than the polynomial model. The main advantage of the polynomial model is that it can model
the distortions more accurately because it uses higher order terms to model the radial distortions and because
it also models the decentering distortions. Note that the polynomial model cannot be inverted analytically.
Therefore, the inverse distortions must be calculated iteratively, which is slower than the calculation of the
inverse distortions with the (analytically invertible) division model.
Typically, the division model should be used for the calibration. If the accuracy of the calibration is not
high enough, the polynomial model can be used. Note, however, that the calibration sequence used for
the polynomial model must provide an even better coverage of the area in which measurements will later
be performed. The distortions may be modeled inaccurately outside of the area that was covered by the
calibration plate. This holds for the image border as well as for areas inside the field of view that were not
covered by the calibration plate.
Area scan pinhole camera: The combination of an area scan camera with a lens that effects a perspective projec-
tion on the object side of the lens and that may show radial and decentering distortions. The lens may be a
tilt lens, i.e., the optical axis of the lens may be tilted with respect to the camera’s sensor (this is sometimes
called a Scheimpflug lens). Since hypercentric lenses also perform a perspective projection, cameras with
hypercentric lenses are pinhole cameras. The models for regular (i.e., non-tilt) pinhole and image-side tele-
centric lenses are identical. In contrast, the models for pinhole and image-side telecentric tilt lenses differ
substantially, as described below.
Area scan telecentric camera: The combination of an area scan camera with a lens that is telecentric on the
object-side of the lens, i.e., that effects a parallel projection on the object-side of the lens, and that may
show radial and decentering distortions. The lens may be a tilt lens. The models for regular (i.e., non-tilt)
bilateral and object-side telecentric lenses are identical. In contrast, the models for bilateral and object-side
telecentric tilt lenses differ substantially, as described below.
Line scan pinhole camera: The combination of a line scan camera with a lens that effects a perspective projection
and that may show radial distortions. Tilt lenses are currently not supported for line scan cameras.
Line scan telecentric camera: The combination of a line scan camera with a lens that effects a telecentric pro-
jection and that may show radial distortions. Tilt lenses are currently not supported for line scan cameras.
To transform a 3D point pw = (xw , yw , zw )T which is given in world coordinates, into a 2D point qi = (r, c)T ,
which is given in pixel coordinates, a chain of transformations is needed:
pw → pc → qc → q̃c → qt → qi
pw 3D world point
pc Transformed into camera coordinate system
qc Projected into image plane (2D point, still in metric coordinates)
q̃c Lens distortion applied
qt If a tilted lens is used, the point q̃c is projected on the point qt in the tilted image plane. In this
case the distorted point q̃c only lies on a virtual image plane of a system without tilt.
qi Pixel coordinates
The following paragraphs describe these steps in more detail for area scan cameras and subsequently for line scan
cameras. For an even more detailed description of the different 3D camera models as well as some explanatory dia-
grams please refer to the chapter “Basics”, section “Camera Model and Parameters” in the “Solution Guide
III-C 3D Vision”.
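This chain is applied, for example, by project_3d_point; a brief sketch, assuming CameraParam and Pose
stem from a preceding calibration:
* Project a 3D world point into the image (pw -> pc -> qc -> ~qc -> qi).
project_3d_point (0.01, 0.02, 0.0, CameraParam, Pose, Row, Column)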
Transformation step 1: pw → pc The point pw is transformed from world into camera coordinates (points as
homogeneous vectors, compare affine_trans_point_3d) by:

(pc , 1)T = (xc , yc , zc , 1)T = [ R  T ; 0 0 0  1 ] · (pw , 1)T
with R and T being the rotation and translation matrices (refer to the chapter “Basics”, section “3D Trans-
formations and Poses‘” in the “Solution Guide III-C 3D Vision” for detailed information).
Transformation step 2: pc → qc If the underlying camera model is an area scan pinhole camera, the projection
of pc = (xc , yc , zc )T into the image plane is described by the following equation:
qc = (u, v)T = (f / zc) · (xc , yc)T

where f = Focus. For cameras with hypercentric lenses, the following equation holds instead:

qc = (u, v)T = (−f / zc) · (xc , yc)T

If the underlying camera model is an area scan telecentric camera, the projection is described by

qc = (u, v)T = m · (xc , yc)T

where m = Magnification.
Transformation step 3: qc → q̃c For all types of cameras, the lens distortions can be modeled either by the
division model or by the polynomial model.
The division model uses one parameter Kappa to model the radial distortions.
The following equations transform the distorted image plane coordinates into undistorted image plane coor-
dinates if the division model is used:
(u, v)T = 1 / (1 + κ(ũ² + ṽ²)) · (ũ, ṽ)T

These equations can be inverted analytically, which leads to the following equations that transform undis-
torted coordinates into distorted coordinates:

q̃c = (ũ, ṽ)T = 2 / (1 + √(1 − 4κ(u² + v²))) · (u, v)T
The polynomial model uses three parameters (K1 , K2 , K3 ) to model the radial distortions and two param-
eters (P1 , P2 ) to model the decentering distortions.
The corresponding equations transform the distorted image plane coordinates into undistorted image plane
coordinates if the polynomial model is used; in contrast to the division model, they cannot be inverted
analytically, so the inverse distortions must be computed iteratively.
For tilt lenses that perform a perspective projection on the image side of the lens, the point q̃c is projected
onto the point qt in the tilted image plane by a projective 2D transformation:

(qt , qwt)T = H · (q̃c , 1)T

where qwt is the additional coordinate from the projective transformation of a homogeneous point, and

    ( h11 h12 h13 )   ( q11·q33 − q13·q31   q21·q33 − q23·q31   0   )
H = ( h21 h22 h23 ) = ( q12·q33 − q13·q32   q22·q33 − q23·q32   0   )
    ( h31 h32 h33 )   ( q13/d               q23/d               q33 )

    ( q11 q12 q13 )   ( (cos ρ)²(1 − cos τ) + cos τ   cos ρ sin ρ (1 − cos τ)       sin ρ sin τ  )
Q = ( q21 q22 q23 ) = ( cos ρ sin ρ (1 − cos τ)       (sin ρ)²(1 − cos τ) + cos τ   −cos ρ sin τ )
    ( q31 q32 q33 )   ( −sin ρ sin τ                  cos ρ sin τ                   cos τ        )

with τ and ρ denoting the tilt angle Tilt and the tilt axis orientation Rot, and d the image plane distance
ImagePlaneDist.
For image-side telecentric tilt lenses and bilateral telecentric tilt lenses (which perform a parallel projec-
tion on the image side of the lens), the projection onto the tilted image plane is described by a linear 2D
transformation, i.e., by a 2 × 2 matrix:
H = [ h11 h12 ; h21 h22 ] = 1 / (q11·q22 − q12·q21) · [ q22  −q12 ; −q21  q11 ]
Finally, the point is transformed into the pixel coordinate system:

qi = (r, c)T = (ṽ/Sy + Cy , ũ/Sx + Cx)T
For line scan cameras, also the relative motion between the camera and the object must be modeled. In HALCON,
the following assumptions for this motion are made:
The motion is described by the motion vector V = (Vx , Vy , Vz )T that must be given in [meter/row] in the camera
coordinate system. The motion vector describes the motion of the camera, assuming a fixed object. In fact, this is
equivalent to the assumption of a fixed camera with the object traveling along −V .
The camera coordinate system of line scan cameras is defined as follows: The origin of the coordinate system
is the center of projection (for pinhole cameras) or the center of distortion (for telecentric cameras), respectively.
The z-axis is identical to the optical axis and directed so that the visible points have positive z coordinates. The
y-axis is perpendicular to the sensor line and to the z-axis. It is directed so that the motion vector has a positive
y-component. The x-axis is perpendicular to the y- and z-axis, so that the x-, y-, and z-axis form a right-handed
coordinate system.
As the camera moves over the object during the image acquisition, also the camera coordinate system moves
relatively to the object, i.e., each image line has been imaged from a different position. This means there would
be an individual pose for each image line. To make things easier, in HALCON all transformations from world
coordinates into camera coordinates and vice versa are based on the pose of the first image line only. The motion
V is taken into account during the projection of the point pc into the image. Consequently, only the pose of the
first image line is computed by the operator find_calib_object (and stored by calibrate_cameras in
the calibration results).
For line scan cameras, the transformation from world to camera coordinates (pw → pc ) works in the same way.
Therefore, you can also apply transformation step 1 as described for area scan cameras above.
For line scan pinhole cameras, the projection of the point pc that is given in the camera coordinate system into
(sub-)pixel coordinates (r, c) in the image is modeled as follows:
Assuming pc = (x, y, z)T, the following set of equations must be solved for m, ũ, and t:

m · u(ũ, pv) = x − t · Vx
m · v(ũ, pv) = y − t · Vy
m · Focus = z − t · Vz

where u(ũ, ṽ) and v(ũ, ṽ) are the undistortion functions that are described above for area scan cameras and pv =
−Sy · Cy .
For line scan telecentric cameras, the following set of equations must be solved for ũ and t:
u(ũ, pv )/Magnification = x − t · Vx
v(ũ, pv )/Magnification = y − t · Vy
with u(ũ, ṽ), v(ũ, ṽ) and pv as defined above. Note that neither z nor Vz influence the projection for telecentric
cameras.
The above formulas already include the compensation for image distortions.
Finally, the point is transformed into the image coordinate system, i.e., the pixel coordinate system:
qi = (r, c)T = (t, ũ/Sx + Cx)T .
6.1 Binocular
subsumed by the parameter values ’cam_param1’ and ’cam_param2’ as well. Note that if the polynomial model is
used to model the lens distortions, the values ’k1_i’, ’k2_i’ and ’k3_i’ can be specified individually, whereas ’p1’
and ’p2’ can only be specified in the group ’poly_tan_2_i’ (with ’i’ indicating the index of the camera). ’poly_i’
specifies the group ’k1_i’, ’k2_i’, ’k3_i’ and ’poly_tan_2_i’.
The possible strings that can be passed to the tuple are listed under EstimateParams in the Parameters section below.
In addition, parameters can be excluded from estimation by using the prefix ’~’. For example, the values
[’pose_rel’,’~transx_rel’] have the same effect as [’alpha_rel’,’beta_rel’,’gamma_rel’,’transy_rel’,’transz_rel’].
On the other hand, [’all’,’~focus1’] determines all internal and external parameters except the focus of camera
1, for instance. The prefix ’~’ can be used with all parameter values except ’all’.
The underlying camera model is explained in the chapter Calibration. The calibrated internal camera parameters
are returned in CamParam1 for camera 1 and in CamParam2 for camera 2.
The external parameters are returned analogously to camera_calibration: the 3D transformation poses
of the calibration model to the respective camera coordinate system (ccs) are returned in NFinalPose1 and
NFinalPose2. Thus, the poses are in the form ccs Pwcs , where wcs denotes the world coordinate system of the
3D calibration model (see Transformations / Poses and “Solution Guide III-C - 3D Vision”). The
relative pose ccs1 Pccs2 , RelPose, specifies the transformation of points in ccs2 into ccs1. Therewith, the final
poses are related with each other (neglecting differences due to the balancing effects of the multi image calibration)
by:
HomMat3D_NFinalPose2 = INV(HomMat3D_RelPose) * HomMat3D_NFinalPose1,
where HomMat3D_* denotes a homogeneous transformation matrix of the respective poses and INV() inverts a
homogeneous matrix.
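This relation can be checked directly with the pose and matrix operators; a brief sketch for the first calibration
object pose, using the variable names from above:
pose_to_hom_mat3d (NFinalPose1[0:6], HomMat3D_NFinalPose1)
pose_to_hom_mat3d (RelPose, HomMat3D_RelPose)
hom_mat3d_invert (HomMat3D_RelPose, HomMat3D_RelPoseInv)
hom_mat3d_compose (HomMat3D_RelPoseInv, HomMat3D_NFinalPose1, HomMat3D_NFinalPose2)
* HomMat3D_NFinalPose2 should (up to balancing effects) correspond to NFinalPose2[0:6].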
The computed average errors returned in Errors give an impression of the accuracy of the calibration. Using
the determined camera parameters, they denote the average Euclidean distance between the projections of the mark
centers and their extracted image coordinates.
For cameras with telecentric lenses, additional conditions must be fulfilled for the setup. They can be found in the
chapter Calibration.
Attention
Stereo setups that contain cameras with and without hypercentric lenses at the same time are not supported. Fur-
thermore, stereo setups that contain area scan and line scan cameras at the same time are not supported.
Parameters
. NX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number-array ; real / integer
Ordered Tuple with all X-coordinates of the calibration marks (in meters).
. NY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number-array ; real / integer
Ordered Tuple with all Y-coordinates of the calibration marks (in meters).
Number of elements: NY == NX
. NZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number-array ; real / integer
Ordered Tuple with all Z-coordinates of the calibration marks (in meters).
Number of elements: NZ == NX
. NRow1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Ordered Tuple with all row-coordinates of the extracted calibration marks of camera 1 (in pixels).
. NCol1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Ordered Tuple with all column-coordinates of the extracted calibration marks of camera 1 (in pixels).
Number of elements: NCol1 == NRow1
. NRow2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Ordered Tuple with all row-coordinates of the extracted calibration marks of camera 2 (in pixels).
Number of elements: NRow2 == NRow1
. NCol2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Ordered Tuple with all column-coordinates of the extracted calibration marks of camera 2 (in pixels).
Number of elements: NCol2 == NRow1
. StartCamParam1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Initial values for the internal parameters of camera 1.
. StartCamParam2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Initial values for the internal parameters of camera 2.
. NStartPose1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
Ordered tuple with all initial values for the poses of the calibration model in relation to camera 1.
Number of elements: NStartPose1 == 7 * NRow1 / NX
. NStartPose2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
Ordered tuple with all initial values for the poses of the calibration model in relation to camera 2.
Number of elements: NStartPose2 == 7 * NRow1 / NX
. EstimateParams (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
Camera parameters to be estimated.
Default: ’all’
List of values: EstimateParams ∈ {’all’, ’pose’, ’pose_caltabs’, ’pose_rel’, ’cam_param1’,
’cam_param2’, ’alpha_rel’, ’beta_rel’, ’gamma_rel’, ’transx_rel’, ’transy_rel’, ’transz_rel’, ’alpha_caltabs’,
’beta_caltabs’, ’gamma_caltabs’, ’transx_caltabs’, ’transy_caltabs’, ’transz_caltabs’, ’focus1’,
’magnification1’, ’kappa1’, ’poly_1’, ’k1_1’, ’k2_1’, ’k3_1’, ’poly_tan_2_1’, ’image_plane_dist1’, ’tilt1’,
’cx1’, ’cy1’, ’sx1’, ’sy1’, ’focus2’, ’magnification2’, ’kappa2’, ’poly_2’, ’k1_2’, ’k2_2’, ’k3_2’,
’poly_tan_2_2’, ’image_plane_dist2’, ’tilt2’, ’cx2’, ’cy2’, ’sx2’, ’sy2’, ’common_motion_vector’}
. CamParam1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .campar ; real / integer / string
Internal parameters of camera 1.
. CamParam2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .campar ; real / integer / string
Internal parameters of camera 2.
. NFinalPose1 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
Ordered tuple with all poses of the calibration model in relation to camera 1.
Number of elements: NFinalPose1 == 7 * NRow1 / NX
. NFinalPose2 (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose(-array) ; real / integer
Ordered tuple with all poses of the calibration model in relation to camera 2.
Number of elements: NFinalPose2 == 7 * NRow1 / NX
RelPoseRect)
map_image (Image1, Map1, ImageMapped1)
map_image (Image2, Map2, ImageMapped2)
Result
binocular_calibration returns 2 (H_MSG_TRUE) if all parameter values are correct and the desired pa-
rameters have been determined by the minimization algorithm. If necessary, an exception is raised.
Execution Information
Possible Predecessors
find_marks_and_pose, caltab_points, read_cam_par
Possible Successors
write_pose, write_cam_par, pose_to_hom_mat3d, disp_caltab,
gen_binocular_rectification_map
See also
find_caltab, sim_caltab, read_cam_par, create_pose, convert_pose_type, read_pose,
hom_mat3d_to_pose, create_caltab, binocular_disparity, binocular_distance
Module
3D Metrology
caltab_points ( : : CalPlateDescr : X, Y, Z )
Read the mark center points from the calibration plate description file.
caltab_points reads the mark center points from the calibration plate description file CalPlateDescr (see
gen_caltab for calibration plates with rectangularly arranged marks and create_caltab for calibration
plates with hexagonally arranged marks) and returns their coordinates in X, Y and Z. The mark center points are
3D coordinates in the calibration plate coordinate system and describe the 3D model of the calibration plate. The
calibration plate coordinate system is located in the middle of the surface of the calibration plate for calibration
plates with rectangularly arranged marks and at the center of the central mark of the first finder pattern for calibra-
tion plates with hexagonally arranged marks. Its z-axis points into the calibration plate, its x-axis to the right, and
its y-axis downwards.
The mark center points are typically used as input parameters for the operator camera_calibration.
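A brief usage sketch with one of the standard description files as an example:
caltab_points ('calplate_80mm.cpd', X, Y, Z)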
Parameters
Result
caltab_points returns 2 (H_MSG_TRUE) if all parameter values are correct and the file CalPlateDescr
has been read successfully. If necessary, an exception is raised.
Execution Information
Possible Successors
camera_calibration
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab,
project_3d_point, get_line_of_sight, gen_caltab
Module
Foundation
Generate a calibration plate description file and a corresponding PostScript file for a calibration plate with hexag-
onally arranged marks.
create_caltab creates the description file of a standard HALCON calibration plate with hexagonally arranged
marks. This calibration plate contains MarksPerRow times NumRows circular marks. These marks are arranged
in a hexagonal lattice such that each mark (except the ones at the border) has six equidistant neighbors.
A standard HALCON calibration plate with hexagonally arranged marks and its coordinate system.
The diameter of the marks is given by the parameter Diameter in meters. The distance between the centers of
horizontally neighboring calibration marks is given by 2 · Diameter. The distance between neighboring rows of
calibration marks is given by 2 · Diameter · √0.75. The width and the height of the generated calibration plate
can be calculated with the following equations:

Width = (2 · (MarksPerRow − 1) / 2 + 3) · 2 · Diameter
Height = (2 · (NumRows − 1) / 2 · √3 + 5) · Diameter
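As a numerical check, the dimensions of the plate from the example file shown further below (31 marks per row,
27 rows, Diameter = 0.00258065 m) can be computed as follows:
Diameter := 0.0051613 / 2
Width := (2 * (31 - 1) / 2 + 3) * 2 * Diameter
Height := (2 * (27 - 1) / 2 * sqrt(3) + 5) * Diameter
* Width = 0.170323 m, Height = 0.129118 m, matching the file header.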
The calibration plate contains one to five finder patterns. A finder pattern is a special mark hexagon (i.e. a mark and
its six neighbors) where either four or six marks contain a hole. Each of these up to five finder patterns is unique
such that it can be used to determine the orientation of the calibration plate and the position of the finder pattern
on the calibration plate. As a consequence, the calibration plate can only be found by find_calib_object
if at least one of these finder patterns is completely visible. The position of the central mark of each finder
pattern is given in FinderRow and FinderColumn. Thus, the length of the tuples given in FinderRow and
FinderColumn, respectively determine the number of finder patterns on the calibration plate. Be aware that two
finder patterns must not overlap. It is recommended to keep a certain distance between the finder patterns, so every
mark containing a hole can be assigned to a finder pattern distinctly. As a rule of thumb, if the calibration plate
contains too few marks to place all finder patterns in distinct positions, it is better to reduce the number of finder
patterns so that they can be distributed more evenly. An example case is depicted below, but note that a successful
detection of the patterns also depends on the used camera setup.
The coordinate system of the calibration plate is located in the center of the central mark of the first finder pattern.
The finder patterns on a calibration plate should not be too close to each other (left). If there are not enough marks
on your plate to distribute the finder patterns further apart you should reduce the number of finder patterns (right).
Depending on Polarity the marks are either light on dark background (for ’light_on_dark’, which is the default)
or dark on light background (for ’dark_on_light’).
The file CalPlateDescr contains the calibration plate description, and must be passed to all HALCON opera-
tions using the generated calibration plate (e.g., set_calib_data_calib_object or sim_caltab). The
default HALCON file extension for the description of a calibration plate with hexagonally arranged marks is ’cpd’.
A calibration plate description file contains, among other things, the number of rows and columns, the dimensions
of the plate, and the coordinates and radius of the calibration marks.
A file generated by create_caltab looks like the following (comments are marked by a ’#’ at the beginning
of a line):
# 27 rows x 31 columns
# Width, height of calibration plate [meter]: 0.170323, 0.129118
# Distance between mark centers [meter]: 0.0051613
# calibration marks at y = 0 m
...
Note that only the coordinates and radius of the marks in the first two rows are listed completely. The corresponding
coordinates and radius of the marks in the other rows are omitted for a better overview.
The file CalPlatePSFile contains the corresponding PostScript description of the calibration plate, which can
be used to print the calibration plate.
Attention
Depending on the accuracy of the used output device (e.g., laser printer), a printed calibration plate may not
match the values in the calibration plate description file CalPlateDescr exactly. Thus, the coordinates of the
calibration marks in the calibration plate description file may have to be corrected!
For purchased calibration plates it is recommended to use the specific calibration description file that is supplied
with your calibration plate.
Parameters
. NumRows (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of rows.
Default: 27
Recommended increment: 1
Restriction: NumRows > 2
Result
create_caltab returns 2 (H_MSG_TRUE) if all parameter values are correct and both files have been written
successfully. If necessary, an exception is raised.
Execution Information
Possible Successors
read_cam_par, caltab_points
Alternatives
gen_caltab
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab
Module
Foundation
Project and visualize the 3D model of the calibration plate in the image.
disp_caltab is used to visualize the calibration marks and the connecting lines between the marks of the used
calibration plate (CalPlateDescr) in the window specified by WindowHandle. Additionally, the x- and
y-axes of the plate’s coordinate system are printed on the plate’s surface. For this, the 3D model of the calibra-
tion plate is projected into the image plane using the internal (CameraParam) and external camera parameters
(CalPlatePose). Thereby the pose is in the form ccs Pwcs , where ccs denotes the camera coordinate sys-
tem and wcs the world coordinate system (see Transformations / Poses and “Solution Guide III-C - 3D
Vision”), thus the pose of the calibration plate in camera coordinates. The underlying camera model is described
in Calibration.
Typically, disp_caltab is used to verify the result of the camera calibration (see Calibration or
camera_calibration) by superimposing it onto the original image. The current line width can be set by
set_line_width, the current color can be set by set_color. Additionally, the font type of the labels of the
coordinate axes can be set by set_font.
The parameter ScaleFac influences the number of supporting points to approximate the elliptic contours of the
calibration marks. You should increase the number of supporting points, if the image part in the output window is
displayed with magnification (see set_part).
Parameters
. WindowHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . window ; handle
Window in which the calibration plate should be visualized.
. CalPlateDescr (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name of the calibration plate description.
Default: ’calplate_320.cpd’
List of values: CalPlateDescr ∈ {’calplate_5mm.cpd’, ’calplate_10mm.cpd’, ’calplate_20mm.cpd’,
’calplate_40mm.cpd’, ’calplate_80mm.cpd’, ’calplate_160mm.cpd’, ’calplate_320mm.cpd’,
’calplate_640mm.cpd’, ’calplate_1200mm.cpd’, ’calplate_20mm_dark_on_light.cpd’,
’calplate_40mm_dark_on_light.cpd’, ’calplate_80mm_dark_on_light.cpd’, ’caltab_650um.descr’,
’caltab_2500um.descr’, ’caltab_6mm.descr’, ’caltab_10mm.descr’, ’caltab_30mm.descr’,
’caltab_100mm.descr’, ’caltab_200mm.descr’, ’caltab_800mm.descr’, ’caltab_small.descr’,
’caltab_big.descr’}
File extension: .cpd, .descr
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. CalPlatePose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
External camera parameters (3D pose of the calibration plate in camera coordinates).
Number of elements: 7
. ScaleFac (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Scaling factor for the visualization.
Default: 1.0
Suggested values: ScaleFac ∈ {0.5, 1.0, 2.0, 3.0}
Recommended increment: 0.05
Restriction: 0.0 < ScaleFac
Example
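A minimal sketch of a typical call, assuming that CameraParam and CalPlatePose stem from a preceding
calibration and that the plate description file is ’calplate_80mm.cpd’:
set_color (WindowHandle, 'green')
set_line_width (WindowHandle, 1)
disp_caltab (WindowHandle, 'calplate_80mm.cpd', CameraParam, CalPlatePose, 1.0)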
Result
disp_caltab returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception is raised.
Execution Information
Find the HALCON calibration plate and set the extracted points and contours in a calibration data model.
find_calib_object searches in Image for a HALCON calibration plate corresponding to the description
of the calibration object with the index CalibObjIdx from the calibration data model CalibDataID. If a
calibration plate is found, find_calib_object extracts the centers and the contours of its marks and estimates
the pose of the plate relative to the observing camera CameraIdx. All collected observation data is stored in
the calibration data model for the calibration object pose CalibObjPoseIdx. In order to ensure a successful
detection of the calibration plate, at least one finder pattern has to be visible in the image. For calibration plates
with hexagonally arranged marks, this is a special hexagon of marks in which either four or six marks contain a
hole; for calibration plates with rectangularly arranged marks, it is the border of the calibration plate with a
triangle in one corner.
Preparation of the input data
Before the operator find_calib_object can be called, a calibration data model has to be defined performing
the following steps:
1. Create a calibration data model with the operator create_calib_data, specifying the number of
cameras in the setup and the number of used calibration objects.
2. Specify the camera type and the initial internal camera parameters for all cameras with the operator
set_calib_data_cam_param. Note that only cameras of the same type can be calibrated in a single
setup.
3. Specify the description of all calibration objects with the operator
set_calib_data_calib_object. Note that for a successful call of find_calib_object a
valid description file of the calibration plate is necessary. This description file has to be set beforehand
via the operator set_calib_data_calib_object. As a consequence, observations of a user-defined
calibration object can only be set with the operator set_calib_data_observ_points.
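A minimal sketch of these preparation steps (the initial camera parameters, the plate description file, and the
image name are assumptions to be replaced by the values of your setup):
StartCamPar := ['area_scan_division', 0.016, 0, 0.0000074, 0.0000074, \
                326, 247, 652, 494]
create_calib_data ('calibration_object', 1, 1, CalibDataID)
set_calib_data_cam_param (CalibDataID, 0, [], StartCamPar)
set_calib_data_calib_object (CalibDataID, 0, 'calplate_40mm.cpd')
read_image (Image, 'calib_image_01')
* Extract the marks and the pose of the plate and store them in the model.
find_calib_object (Image, CalibDataID, 0, 0, 0, [], [])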
For calibration plates with rectangularly arranged marks (see gen_caltab), the rim of the calibration plate is added
to the observations; calibration plates with a hexagonal pattern (see create_caltab) store one of their finder
patterns. Additionally, and irrespective of the used calibration plate, the contour of each mark is added to the
calibration model.
Setting additional parameters
Using calibration plates with hexagonally arranged marks, the following additional parameter can be set via
GenParamName and GenParamValue:
’sigma’: Smoothing factor for the extraction of the mark contours. For increasing values of ’sigma’, the filter
width and thereby the amount of smoothing increases (see also edges_sub_pix for the influence of the
filter width on the Canny filter).
Suggested values: 0.5, 0.7, 0.9, 1.0, 1.2, 1.5
Default: 1.0
For calibration plates with rectangularly arranged marks, find_calib_object essentially encapsulates
the sequence of three operator calls: find_caltab, find_marks_and_pose and
set_calib_data_observ_points. For this kind of calibration plate, the following parameters can be
set using GenParamName and GenParamValue:
’alpha’: Smoothing factor for the extraction of the mark contours. For increasing values of ’alpha’, the filter width
and thereby the amount of smoothing decreases (see also edges_sub_pix for the influence of the filter
width on the Lanser2 filter).
Suggested values: 0.5, 0.7, 0.9, 1.0, 1.2, 1.5
Default: 0.9
’gap_tolerance’: Tolerance factor for gaps between the marks. If the marks appear closer to each other than
expected, you might set ’gap_tolerance’ < 1.0 to avoid disturbing patterns outside the calibration plate to be
associated with the calibration plate. This can typically happen if the plate is strongly tilted and positioned
in front of a background that exposes mark-like patterns. If the distances between single marks vary in a
wide range, e.g., if the calibration plate appears with strong perspective distortion in the image, you might
set ’gap_tolerance’ > 1.0 to enforce the marks grouping (see also find_caltab).
Suggested values: 0.75, 0.9, 1.0, 1.1, 1.2, 1.5
Default: 1.0
’max_diam_marks’: Maximum expected diameter of the marks (needed internally by
find_marks_and_pose). By default, this value is estimated by the preceding internal call to
find_caltab. However, if the estimation is erroneous for no obvious reason or the internal call to
find_caltab fails or is simply skipped (see ’skip_find_caltab’ below), you might have to adjust this
value.
Suggested values: 50.0, 100.0, 150.0, 200.0, 300.0
’skip_find_caltab’: Skip the internal call to find_caltab. If activated, only the domain of Image reduces the
search area for the internal call of find_marks_and_pose. Thus, a user defined calibration plate region
can be incorporated by setting ’skip_find_caltab’=’true’ and reducing the Image domain to the user region.
List of values: ’false’, ’true’
Default: ’false’
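For example, a user-defined plate region could be incorporated as follows (a sketch; the region coordinates and
the parameter values are placeholders):
gen_rectangle1 (PlateRegion, 100, 150, 400, 520)
reduce_domain (Image, PlateRegion, ImageReduced)
* Skip find_caltab and search for the marks only within the reduced domain.
find_calib_object (ImageReduced, CalibDataID, 0, 0, 0, \
                   ['skip_find_caltab', 'max_diam_marks'], ['true', 100.0])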
Segment the region of a standard calibration plate with rectangularly arranged marks in the image.
find_caltab is used to determine the region of a plane calibration plate with circular marks in the input image
Image. The region must correspond to a standard calibration plate with rectangularly arranged marks described in
the file CalPlateDescr. The successfully segmented region is returned in CalPlate. The operator provides
two algorithms. By setting appropriate integer values in SizeGauss, MarkThresh, and MinDiamMarks,
respectively, you invoke the standard algorithm. If you pass a tuple of parameter names in SizeGauss and a
corresponding tuple of parameter values in MarkThresh, or just two empty tuples, respectively, you invoke the
advanced algorithm instead. In this case the value passed in MinDiamMarks is ignored.
Standard algorithm
First, the input image is smoothed (see gauss_image); the size of the used filter mask is given by SizeGauss.
Afterwards, a threshold operator (see threshold) with a minimum gray value MarkThresh is applied. Among
the extracted connected regions the most convex region with an almost correct number of holes (corresponding to
the dark marks of the calibration plate) is selected. Holes with a diameter smaller than the expected size of the
marks MinDiamMarks are eliminated to reduce the impact of noise. The number of marks is read from the
calibration plate description file CalPlateDescr. The complete explanation of this file can be found within the
description of gen_caltab.
Advanced algorithm
First, an image pyramid based on Image is built. Starting from the highest pyramid level, round regions are
segmented with a dynamic threshold. Then, they are associated in groups based on their mutual proximity and it
is evaluated whether they can represent marks of a potential calibration plate. The search is terminated once the
expected number of marks has been identified in one group. The surrounding lighter area is returned in CalPlate.
The image pyramid makes the search independent from the size of the image and the marks. The dynamic threshold
makes the algorithm immune to bad or irregular illumination. Therefore, in general, no parameter is required. Yet,
you can adjust some auxiliary parameters of the advanced algorithm by passing a list of parameter names (strings)
to SizeGauss and a list of corresponding parameter values to MarkThresh. Currently the following parameter
is supported:
’gap_tolerance’: Tolerance factor for gaps between the marks. If the marks appear closer to each other than
expected, you might set ’gap_tolerance’ < 1.0 to avoid disturbing patterns outside the calibration plate to be
associated with the calibration plate. This can typically happen if the plate is strongly tilted and positioned
in front of a background that exposes mark-like patterns. If the distances between single marks deviate
significantly, e.g., if the calibration plate appears with strong perspective distortion in the image, you might
set ’gap_tolerance’ > 1.0 to enforce the grouping for the more distant marks.
Suggested values: 0.75, 0.9, 1.0, 1.1, 1.2, 1.5
Default: 1.0
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage(-array) ; object : byte / uint2
Input image.
. CalPlate (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; object
Output region.
. CalPlateDescr (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name of the calibration plate description.
Default: ’caltab_100mm.descr’
List of values: CalPlateDescr ∈ {’caltab_650um.descr’, ’caltab_2500um.descr’, ’caltab_6mm.descr’,
’caltab_10mm.descr’, ’caltab_30mm.descr’, ’caltab_100mm.descr’, ’caltab_200mm.descr’,
’caltab_800mm.descr’, ’caltab_small.descr’, ’caltab_big.descr’}
File extension: .descr
. SizeGauss (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Filter size of the Gaussian.
Default: 3
List of values: SizeGauss ∈ {0, 3, 5, 7, 9, 11, ’gap_tolerance’}
. MarkThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / real
Threshold value for mark extraction.
Default: 112
Suggested values: MarkThresh ∈ {48, 64, 80, 96, 112, 128, 144, 160, 0.5, 0.9, 1.0, 1.1, 1.5}
. MinDiamMarks (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Expected minimal diameter of the marks on the calibration plate.
Default: 5
Suggested values: MinDiamMarks ∈ {3, 5, 9, 15, 30, 50, 70}
Example
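A minimal sketch showing both algorithms (the image name and the plate description are placeholders):
read_image (Image, '3d_machine_vision/calib/calib_01')
* Standard algorithm with explicit integer parameters.
find_caltab (Image, CalPlate, 'caltab_30mm.descr', 3, 112, 5)
* Advanced algorithm: pass two empty tuples; MinDiamMarks is ignored.
find_caltab (Image, CalPlateAdv, 'caltab_30mm.descr', [], [], 5)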
Result
find_caltab returns 2 (H_MSG_TRUE) if all parameter values are correct and an image region is
found. The behavior in case of empty input (no image given) can be set via set_system(::
’no_object_result’,<Result>:) and the behavior in case of an empty result region via set_system
(::’store_empty_region’,<’true’/’false’>:). If necessary, an exception is raised.
Execution Information
Extract rectangularly arranged 2D calibration marks from the image and calculate initial values for the external
camera parameters.
find_marks_and_pose is used to determine the input data for a subsequent camera calibration using a calibra-
tion plate with rectangularly arranged marks (see Calibration or camera_calibration): First, the 2D center
points [RCoord,CCoord] of the calibration marks within the region CalPlateRegion of the input image
Image are extracted and ordered. Secondly, a rough estimate for the external camera parameters (StartPose)
is computed, i.e., the 3D pose (= position and orientation) of the calibration plate relative to the camera coordinate
system (see create_pose for more information about 3D poses).
In the input image Image an edge detector is applied (see edges_image, mode ’lanser2’) to the region
CalPlateRegion, which can be found by applying the operator find_caltab. The filter parameter for
this edge detection can be tuned via Alpha. Use a smaller value for Alpha to achieve a stronger smoothing
effect. In the edge image closed contours are searched for: The number of closed contours must correspond to
the number of calibration marks as described in the calibration plate description file CalPlateDescr and the
contours have to be elliptically shaped. Contours shorter than MinContLength are discarded, just as contours
enclosing regions with a diameter larger than MaxDiamMarks (e.g., the border of the calibration plate).
For the detection of contours a threshold operator is applied on the resulting amplitudes of the edge detector. All
points with a high amplitude (i.e., borders of marks) are selected.
First, the threshold value is set to StartThresh. If the search for the closed contours or the successive pose
estimate fails, this threshold value is successively decreased by DeltaThresh down to a minimum value of
MinThresh.
Each of the found contours is refined with subpixel accuracy (see edges_sub_pix) and subsequently approxi-
mated by an ellipse. The center points of these ellipses represent a good approximation of the desired 2D image
coordinates [RCoord,CCoord] of the calibration mark center points. The order of the values within these two tu-
ples must correspond to the order of the 3D coordinates of the calibration marks in the calibration plate description
file CalPlateDescr, since this fixes the correspondences between extracted image marks and known model
marks (given by caltab_points)! If a triangular orientation mark is defined in a corner of the plate by the
plate description file (see gen_caltab), the mark will be detected and the point order is returned in row-major
order beginning with the corner mark in the (barycentric) negative quadrant with respect to the defined coordinate
system of the plate. Else, if no orientation mark is defined, the order of the center points is in row-major order
beginning at the upper left corner mark in the image.
Based on the ellipse parameters for each calibration mark, a rough estimate for the external camera parameters is
finally computed. For this purpose the fixed correspondences between extracted image marks and known model
marks are used. The estimate StartPose describes the pose of the calibration plate in the camera coordinate
system as required by the operator camera_calibration.
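A typical call sequence might look as follows (a sketch; the initial camera parameters, the image name, and the
plate description are placeholders):
StartCamParam := ['area_scan_division', 0.016, 0, 0.0000074, 0.0000074, \
                  326, 247, 652, 494]
read_image (Image, '3d_machine_vision/calib/calib_01')
find_caltab (Image, CalPlate, 'caltab_30mm.descr', 3, 112, 5)
* Extract the mark centers and a rough estimate of the plate pose.
find_marks_and_pose (Image, CalPlate, 'caltab_30mm.descr', StartCamParam, \
                     128, 10, 18, 0.9, 15.0, 100.0, RCoord, CCoord, StartPose)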
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . singlechannelimage ; object : byte / uint2
Input image.
. CalPlateRegion (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . region ; object
Region of the calibration plate.
. CalPlateDescr (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name of the calibration plate description.
Default: ’caltab_100mm.descr’
List of values: CalPlateDescr ∈ {’caltab_650um.descr’, ’caltab_2500um.descr’, ’caltab_6mm.descr’,
’caltab_10mm.descr’, ’caltab_30mm.descr’, ’caltab_100mm.descr’, ’caltab_200mm.descr’,
’caltab_800mm.descr’, ’caltab_small.descr’, ’caltab_big.descr’}
File extension: .descr
. StartCamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Initial values for the internal camera parameters.
. StartThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Initial threshold value for contour detection.
Default: 128
Suggested values: StartThresh ∈ {80, 96, 112, 128, 144, 160}
Restriction: StartThresh > 0
. DeltaThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Loop value for successive reduction of StartThresh.
Default: 10
Suggested values: DeltaThresh ∈ {6, 8, 10, 12, 14, 16, 18, 20, 22}
Restriction: DeltaThresh > 0
. MinThresh (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Minimum threshold for contour detection.
Default: 18
Suggested values: MinThresh ∈ {8, 10, 12, 14, 16, 18, 20, 22}
Restriction: MinThresh > 0
. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Filter parameter for contour detection, see edges_image.
Default: 0.9
Suggested values: Alpha ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1}
Value range: 0.2 ≤ Alpha ≤ 2.0
Restriction: Alpha > 0.0
. MinContLength (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Minimum length of the contours of the marks.
Default: 15.0
Suggested values: MinContLength ∈ {10.0, 15.0, 20.0, 30.0, 40.0, 100.0}
Restriction: MinContLength > 0.0
. MaxDiamMarks (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Maximum expected diameter of the marks.
Default: 100.0
Suggested values: MaxDiamMarks ∈ {50.0, 100.0, 150.0, 200.0, 300.0}
Restriction: MaxDiamMarks > 0.0
. RCoord (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Tuple with row coordinates of the detected marks.
. CCoord (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Tuple with column coordinates of the detected marks.
Result
find_marks_and_pose returns 2 (H_MSG_TRUE) if all parameter values are correct and an estimation for
the external camera parameters has been determined successfully. If necessary, an exception is raised.
Execution Information
Generate a calibration plate description file and a corresponding PostScript file for a calibration plate with rectan-
gularly arranged marks.
gen_caltab generates the description of a standard HALCON calibration plate with rectangularly arranged
marks. This calibration plate consists of XNum times YNum black circular marks on a white plane which are
surrounded by a black frame.
The marks are arranged in a rectangular grid with YNum equidistant rows and XNum equidistant columns. The
distance between neighboring rows and columns is given by the parameter MarkDist in meters. The marks’
diameter is set via the parameter DiameterRatio and is defined by the equation Diameter = MarkDist ·
DiameterRatio. Using a
distance between marks of 0.01 m and a diameter ratio of 0.5, the width of the dark surrounding frame becomes 8
cm, and the radius of the marks is set to 2.5 mm. The coordinate system of the calibration plate is located in the
barycenter of all marks, its z-axis points into the calibration plate, its x-axis to the right, and its y-axis downwards.
The black frame of the calibration plate encloses a triangular black orientation mark in the top left corner to
uniquely determine the position of the calibration plate. The width and the height of the generated calibration plate
can be calculated with the following equations:
Width = MarkDist · (XNum + 1)
Height = MarkDist · (YNum + 1)
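For example, with the default values MarkDist = 0.0125 m and XNum = YNum = 7, the generated plate measures
0.0125 · 8 = 0.1 m in both width and height.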
The file CalPlateDescr contains the calibration plate description, e.g., the number of rows and columns of the
calibration plate, the geometry of the surrounding frame (see find_caltab), the triangular orientation mark, an
offset of the coordinate system to the plate’s surface in z-direction, and the x,y coordinates and the radius of all
calibration plate marks given in the calibration plate coordinate system. The definition of the orientation and the
offset, indicated by t and z, is optional and can be commented out. The default HALCON file extension for the
calibration plate description is ’descr’. In a file generated by gen_caltab, comments are marked by a ’#’ at the
beginning of a line.
The file CalPlatePSFile contains the corresponding PostScript description of the calibration plate.
Attention
Depending on the accuracy of the used output device (e.g., laser printer), the printed calibration plate may not
match the values in the calibration plate description file CalPlateDescr exactly. Thus, the coordinates of the
calibration marks in the calibration plate description file may have to be corrected!
Parameters
. XNum (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of marks in x direction.
Default: 7
Suggested values: XNum ∈ {5, 7, 9}
Recommended increment: 1
Restriction: XNum > 1
. YNum (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of marks in y direction.
Default: 7
Suggested values: YNum ∈ {5, 7, 9}
Recommended increment: 1
Restriction: YNum > 1
. MarkDist (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Distance of the marks in meters.
Default: 0.0125
Suggested values: MarkDist ∈ {0.1, 0.0125, 0.00375, 0.00125}
Restriction: 0.0 < MarkDist
. DiameterRatio (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Ratio of the mark diameter to the mark distance.
Default: 0.5
Suggested values: DiameterRatio ∈ {0.5, 0.55, 0.6, 0.65}
Restriction: 0.0 < DiameterRatio < 1.0
. CalPlateDescr (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name of the calibration plate description.
Default: ’caltab.descr’
List of values: CalPlateDescr ∈ {’caltab.descr’}
File extension: .descr
. CalPlatePSFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name of the PostScript file.
Default: ’caltab.ps’
File extension: .ps
Example
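A minimal sketch using the default parameter values (the output file names are placeholders):
* Generate a 7 x 7 plate with 12.5 mm mark distance and 6.25 mm mark diameter.
gen_caltab (7, 7, 0.0125, 0.5, 'caltab.descr', 'caltab.ps')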
Result
gen_caltab returns 2 (H_MSG_TRUE) if all parameter values are correct and both files have been written
successfully. If necessary, an exception is raised.
Execution Information
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, sim_caltab
Module
Foundation
Result
sim_caltab returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception is raised.
Execution Information
Possible Predecessors
camera_calibration, find_marks_and_pose, read_pose, read_cam_par,
hom_mat3d_to_pose
Possible Successors
find_caltab
See also
find_caltab, find_marks_and_pose, camera_calibration, disp_caltab, create_pose,
hom_mat3d_to_pose, project_3d_point, gen_caltab
Module
Calibration
Result
If the parameters are valid, the operator cam_mat_to_cam_par returns the value 2 (H_MSG_TRUE). If neces-
sary an exception is raised.
Execution Information
Possible Predecessors
stationary_camera_self_calibration
See also
camera_calibration, cam_par_to_cam_mat
Module
Calibration
Result
If the parameters are valid, the operator cam_par_to_cam_mat returns the value 2 (H_MSG_TRUE). If neces-
sary an exception is raised.
Execution Information
Result
read_cam_par returns 2 (H_MSG_TRUE) if all parameter values are correct and the file has been read success-
fully. If necessary an exception is raised.
Execution Information
Possible Successors
fwrite_serialized_item, send_serialized_item, deserialize_cam_par
Module
Foundation
The default HALCON file extension for the camera parameters is ’dat’.
The internal camera parameters can later be read again with read_cam_par.
Parameters
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. CamParFile (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name of internal camera parameters.
Default: ’campar.dat’
List of values: CamParFile ∈ {’campar.dat’, ’campar.initial’, ’campar.final’}
File extension: .dat
Example
*
* Calibrate the camera.
*
StartCamPar := ['area_scan_division', 0.016, 0, 0.0000074, 0.0000074, \
326, 247, 652, 494]
create_calib_data ('calibration_object', 1, 1, CalibDataID)
set_calib_data_cam_param (CalibDataID, 0, [], StartCamPar)
set_calib_data_calib_object (CalibDataID, 0, 'caltab_30mm.descr')
NumImages := 10
for I := 1 to NumImages by 1
    read_image (Image, '3d_machine_vision/calib/calib_' + I$'02d')
    find_calib_object (Image, CalibDataID, 0, 0, I, [], [])
    get_calib_data_observ_contours (Caltab, CalibDataID, 'caltab', 0, 0, I)
endfor
calibrate_cameras (CalibDataID, Error)
get_calib_data (CalibDataID, 'camera', 0, 'params', CamParam)
* Write the internal camera parameters to a file.
write_cam_par (CamParam, 'camera_parameters.dat')
Result
write_cam_par returns 2 (H_MSG_TRUE) if all parameter values are correct and the file has been written
successfully. If necessary an exception is raised.
Execution Information
6.4 Hand-Eye
The operator calibrate_hand_eye determines the 3D pose of a robot (“hand”) relative to a camera or 3D
sensor (“eye”) based on the calibration data model CalibDataID. With the determined 3D poses, the poses of
the calibration object in the camera coordinate system can be transformed into the coordinate system of the robot
which can then, e.g., grasp an inspected part. There are two possible configurations of robot-camera (hand-eye)
systems: The camera can be mounted on the robot or be stationary and observe the robot. Note that the term robot
is used here to denote any mechanism that moves objects. Thus, you can use calibrate_hand_eye to calibrate
many different systems, from pan-tilt heads to multi-axis manipulators.
In essence, systems suitable for hand-eye calibration are described by a closed chain of four Euclidean transforma-
tions. In this chain two non-consecutive transformations are either known from the robot controller or computed
from camera data, e.g., calibration object poses observed by a camera. The two unknown constant transformations
are computed by the hand-eye calibration procedure.
A hand-eye calibration is performed similarly to the calibration of the external camera parameters (see Calibration):
You acquire a set of poses of a calibration object in the camera coordinate system, and a corresponding set of poses
of the tool in robot base coordinates and set them in the calibration data model CalibDataID.
In contrast to the camera calibration, the calibration object is not moved manually. This task is delegated to
the robot. Basically, two hand-eye calibration scenarios can be distinguished. A robot either moves the camera
(moving camera) or it moves the calibration object (stationary camera). The robot’s movements are assumed
to be known. They are used as an input for the hand-eye calibration and are set in the calibration data model
CalibDataID using set_calib_data.
The results of a hand-eye calibration are two poses: For the moving camera scenario, the 3D pose of the tool in
the camera coordinate system (’tool_in_cam_pose’) and the 3D pose of the calibration object in the robot base
coordinate system (’obj_in_base_pose’) are computed. For the stationary camera scenario, the 3D pose of the
robot base in the camera coordinate system (’base_in_cam_pose’) and the 3D pose of the calibration object in the
tool coordinate system (’obj_in_tool_pose’) are computed. Their pose type is identical to the pose type of the input
poses. If the input poses have different pose types, poses of type 0 are returned.
The two hand-eye calibration scenarios are discussed in more detail below, followed by general information about
the data for and the preparation of the calibration data model.
Moving camera (mounted on a robot)
In this configuration, the calibration object remains stationary. The camera is mounted on the robot and is moved
to different positions by the robot. The main idea behind the hand-eye calibration is that the information extracted
from an observation of the calibration object, i.e., the pose of the calibration object relative to the camera, can be
seen as a chain of poses or homogeneous transformation matrices from the calibration object via the base of the
robot to its tool (end-effector) and finally to the camera:
Moving camera: camera Hcal = camera Htool · tool Hbase · base Hcal
From the set of calibration object poses (’obj_in_cam_pose’) and the poses of the tool in the robot base coordi-
nate system (’tool_in_base_pose’), the operator calibrate_hand_eye determines the two missing transfor-
mations at the ends of the chain, i.e., the pose of the robot tool in the camera coordinate system (camera Htool ,
’tool_in_cam_pose’) and the pose of the calibration object in the robot base coordinate system (base Hcal ,
’obj_in_base_pose’). These two poses are constant.
In contrast, the transformation in the middle of the chain, base Htool , is known but changes for each observation of
the calibration object, because it describes the pose of the tool with respect to the robot base coordinate system. In
the equation the inverted transformation matrix is used. The inversion is performed internally.
Note that when calibrating SCARA robots, it is not possible to determine the Z translation of ’obj_in_base_pose’.
To eliminate this ambiguity the Z translation ’obj_in_base_pose’ is internally set to 0.0 and the ’tool_in_cam_pose’
is calculated accordingly. It is necessary to determine the true translation in Z after the calibration by moving the
robot to a pose of known height in the camera coordinate system. For this, the following approach can be applied:
The calibration plate is placed at an arbitrary position. The robot is then moved such that the camera can observe
the calibration plate. Now, an image of the calibration plate is acquired and the current robot pose is queried
(ToolInBasePose1). From the image, the pose of the calibration plate in the camera coordinate system can be
determined (ObjInCamPose1). Afterwards, the tool of the robot is manually moved to the origin of the calibration
plate and the robot pose is queried again (ToolInBasePose2). These three poses and the result of the calibration
(ToolInCamPose) can be used to fix the Z ambiguity by using the following lines of code:
pose_invert(ToolInCamPose, CamInToolPose)
pose_compose(CamInToolPose, ObjInCamPose1, ObjInToolPose1)
pose_invert(ToolInBasePose1, BaseInToolPose1)
pose_compose(BaseInToolPose1, ToolInBasePose2, Tool2InTool1Pose)
ZCorrection := ObjInToolPose1[2]-Tool2InTool1Pose[2]
set_origin_pose(ToolInCamPose, 0, 0, ZCorrection, ToolInCamPoseFinal)
The ’optimization_method’ ’stochastic’ also estimates the uncertainty of observations. Besides the input poses
described above, it also uses the extracted calibration marks and is thus only available for use with a camera and a
calibration plate, not for use with a 3D sensor. For articulated robots, the hand-eye poses and camera parameters
are refined simultaneously.
Stationary camera
In this configuration, the robot grasps the calibration object and moves it in front of the camera. Again, the
information extracted from an observation of the calibration object, i.e., the pose of the calibration object in the
camera coordinate system (e.g., the external camera parameters), are equal to a chain of poses or homogeneous
transformation matrices, this time from the calibration object via the robot’s tool to its base and finally to the
camera:
Stationary camera: camera Hcal = camera Hbase · base Htool · tool Hcal
(the corresponding poses are ’obj_in_cam_pose’, ’base_in_cam_pose’, ’tool_in_base_pose’, and ’obj_in_tool_pose’)
Analogously to the configuration with a moving camera, the operator calibrate_hand_eye determines the
two transformations at the ends of the chain, here the pose of the robot base coordinate system in camera coordi-
nates (camera Hbase , ’base_in_cam_pose’) and the pose of the calibration object relative to the robot tool (tool Hcal ,
’obj_in_tool_pose’).
The transformation in the middle of the chain, base Htool , describes the pose of the tool relative to the robot base
coordinate system. The transformation camera Hcal describes the pose of the calibration object relative to the
camera coordinate system.
Note that when calibrating SCARA robots, it is not possible to determine the Z translation of ’obj_in_tool_pose’.
To eliminate this ambiguity the Z translation of ’obj_in_tool_pose’ is internally set to 0.0 and the
’base_in_cam_pose’ is calculated accordingly. It is necessary to determine the true translation in Z after the
calibration by moving the robot to a pose of known height in the camera coordinate system. For this, the following
approach can be applied: A calibration plate (that is not attached to the robot) is placed at an arbitrary position
such that it can be observed by the camera. The pose of the calibration plate must then be determined in the cam-
era coordinate system (ObjInCamPose). Afterwards the tool of the robot is manually moved to the origin of the
calibration plate and the robot pose is queried (ToolInBasePose). The two poses and the result of the calibration
(BaseInCamPose) can be used to fix the Z ambiguity by using the following lines of code:
pose_invert(BaseInCamPose, CamInBasePose)
pose_compose(CamInBasePose, ObjInCamPose, ObjInBasePose)
ZCorrection := ObjInBasePose[2]-ToolInBasePose[2]
set_origin_pose(BaseInCamPose, 0, 0, ZCorrection, BaseInCamPoseFinal)
The ’optimization_method’ ’stochastic’ also estimates the uncertainty of observations. Besides the input poses
described above, it also uses the extracted calibration marks and is thus only available for use with a camera and a
calibration plate, not for use with a 3D sensor. For articulated robots, the hand-eye poses and camera parameters
are refined simultaneously.
Preparing the calibration input data
Before calling calibrate_hand_eye, you must create and fill the calibration data model with the following
steps:
1. Create a calibration data model with the operator create_calib_data, specifying the num-
ber of cameras in the setup and the number of used calibration objects. Depending on your
scenario, CalibSetup has to be set to ’hand_eye_moving_camera’, ’hand_eye_stationary_camera’,
’hand_eye_scara_moving_camera’, or ’hand_eye_scara_stationary_camera’. These four scenarios on the
one hand distinguish whether the camera or the calibration object is moved by the robot and on the other
hand distinguish whether an articulated robot or a SCARA robot is calibrated. The arm of an articulated
robot has three rotary joints typically covering 6 degrees of freedom (3 translations and 3 rotations). SCARA
robots have two parallel rotary joints and one parallel prismatic joint covering only 4 degrees of freedom
(3 translations and 1 rotation). Loosely speaking, an articulated robot is able to tilt its end effector while a
SCARA robot is not.
2. Specify the optimization method with the operator set_calib_data. For the parameter
DataName=’optimization_method’, three options for DataValue are available, DataValue=’linear’,
DataValue=’nonlinear’ and DataValue=’stochastic’ (see paragraph ’Performing the actual hand-eye
calibration’).
3. Specify the poses of the calibration object
(a) For each observation of the calibration object, the 3D pose can be set directly using the operator
set_calib_data_observ_pose. This operator is intended to be used with generic 3D sensors
that observe the calibration object.
(b) The pose of the calibration object can also be estimated using camera images. The cali-
bration object has to be set in the calibration data model CalibDataID with the operator
set_calib_data_calib_object. Initial camera parameters have to be set with the operator
set_calib_data_cam_param. If a standard HALCON calibration plate is used, the operator
find_calib_object determines the pose of the calibration plate relative to the camera and saves it
in the calibration data model CalibDataID.
In this case, for articulated (i.e., non-SCARA) robots, the operator calibrate_hand_eye calibrates the
camera before performing the hand-eye calibration. If ’optimization_method’ is set to ’stochastic’, the
hand-eye poses and camera parameters are then refined simultaneously. If the provided camera parameters
are already calibrated, the camera calibration can be switched off by calling set_calib_data
(CalibDataID,’camera’,’general’,’excluded_settings’,’params’).
In contrast, for SCARA robots calibrate_hand_eye always assumes that the provided camera
parameters are already calibrated. In this case the internal camera calibration is never performed
automatically during hand-eye calibration, because the internal camera parameters cannot be calibrated
reliably without significantly tilting the calibration plate with respect to the camera. For hand-eye
calibration, the calibration plate is often approximately parallel to the image plane, and since a SCARA
robot cannot tilt the plate, this holds for all poses. Therefore, the camera must be calibrated beforehand
using a different set of calibration images.
4. Specify the poses of the tool in robot base coordinates. For each pose of the calibration object in
the camera coordinate system, the corresponding pose of the tool in the robot base coordinate sys-
tem has to be set with the operator set_calib_data(CalibDataID,’tool’, PoseNumber,
’tool_in_base_pose’, ToolInBasePose).
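A minimal sketch of these steps for a moving camera with a standard calibration plate (the initial camera
parameters, the file names, the number of poses, and the robot poses are assumptions; the result queries at the
end assume the item names documented for get_calib_data):
StartCamPar := ['area_scan_division', 0.016, 0, 0.0000074, 0.0000074, \
                326, 247, 652, 494]
create_calib_data ('hand_eye_moving_camera', 1, 1, CalibDataID)
set_calib_data (CalibDataID, 'model', 'general', 'optimization_method', \
                'nonlinear')
set_calib_data_cam_param (CalibDataID, 0, [], StartCamPar)
set_calib_data_calib_object (CalibDataID, 0, 'caltab_30mm.descr')
NumPoses := 15
for I := 0 to NumPoses - 1 by 1
    read_image (Image, 'hand_eye_calib_' + I$'02d')
    find_calib_object (Image, CalibDataID, 0, 0, I, [], [])
    read_pose ('tool_in_base_pose_' + I$'02d' + '.dat', ToolInBasePose)
    set_calib_data (CalibDataID, 'tool', I, 'tool_in_base_pose', ToolInBasePose)
endfor
calibrate_hand_eye (CalibDataID, Errors)
* Query the resulting constant poses of the moving camera scenario.
get_calib_data (CalibDataID, 'camera', 0, 'tool_in_cam_pose', ToolInCamPose)
get_calib_data (CalibDataID, 'calib_obj', 0, 'obj_in_base_pose', ObjInBasePose)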
• The position of the calibration object (moving camera: relative to the robot’s base; stationary camera: relative
to the robot’s tool) and the position of the camera (moving camera: relative to the robot’s tool; stationary
camera: relative to the robot’s base) must not be changed between the calibration poses.
• Even though a lower limit of three calibration object poses is theoretically possible, it is recommended to
acquire 10 or more poses in which the poses of the camera or the robot hand are sufficiently different. If
’optimization_method’ is set to ’stochastic’, at least 25 poses are recommended. The estimation will be better
the more poses are used.
For articulated (i.e., non-SCARA) robots the amount of rotation between the calibration object poses is
essential and should be at least 30 degrees or better 60 degrees. The rotations between the poses must exhibit
at least two different axes of rotation. Very different orientations lead to more precise results of the hand-eye
calibration. For SCARA robots there is only one axis of rotation. The amount of rotation between the images
should also be large.
• For cameras, the internal camera parameters must be constant during and after the calibration. Note that
changes of the image size, the focal length, the aperture, or the focus cause a change of the internal camera
parameters.
• As mentioned, the camera must not be modified between the acquisition of the individual images. Please
make sure that the focus is sufficient for the expected changes of the camera to calibration plate distance.
Therefore, bright lighting conditions for the calibration plate are important, because then you can use smaller
apertures, which result in a larger depth of focus.
If the robot’s Cartesian interface describes the orientation in a different way than a pose of type 0 or 2, e.g., with
the representation ZYZ (Rz (ϕ1) · Ry (ϕ2) · Rz (ϕ3)), the corresponding homogeneous transformation matrix can
be built step by step and then converted into a pose:
hom_mat3d_identity(HomMat3DIdent)
hom_mat3d_rotate(HomMat3DIdent, phi3, ’z’, 0, 0, 0, HomMat3DRotZ)
hom_mat3d_rotate(HomMat3DRotZ, phi2, ’y’, 0, 0, 0, HomMat3DRotZY)
hom_mat3d_rotate(HomMat3DRotZY, phi1, ’z’, 0, 0, 0, HomMat3DRotZYZ)
hom_mat3d_translate(HomMat3DRotZYZ, Tx, Ty, Tz, base_H_tool)
hom_mat3d_to_pose(base_H_tool, RobPose)
Please note that the hand-eye calibration only works if the poses of the tool in robot base coordinates are specified
with high accuracy. Of the provided methods, ’optimization_method’ set to ’stochastic’ will yield the most robust
results with respect to noise on the poses of the tool in robot base coordinates. The estimation will be better the
more input poses are used.
Please note that this operator supports canceling timeouts and interrupts if ’optimization_method’ is set to ’stochas-
tic’.
Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. Errors (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real
Average residual error of the optimization.
Execution Information
• CalibDataID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_calib_data, set_calib_data_cam_param, set_calib_data_calib_object,
set_calib_data_observ_pose, find_calib_object, set_calib_data,
remove_calib_data, remove_calib_data_observ
Possible Successors
get_calib_data
References
K. Daniilidis: “Hand-Eye Calibration Using Dual Quaternions”; International Journal of Robotics Research, Vol.
18, No. 3, pp. 286-298; 1999.
M. Ulrich, C. Steger: “Hand-Eye Calibration of SCARA Robots Using Dual Quaternions”; Pattern Recognition
and Image Analysis, Vol. 26, No. 1, pp. 231-239; January 2016.
M. Ulrich, M. Hillemann: “Generic Hand–Eye Calibration of Uncertain Robots”; 2021 IEEE International Con-
ference on Robotics and Automation (ICRA), pp. 11060-11066; 2021.
Module
Calibration
Moving camera: cam Hcal = cam Htool · tool Hbase · base Hcal
(the corresponding parameters are CameraPose, RobotPoses, and CalibrationPose)
From the set of calibration images, the operator hand_eye_calibration determines the two transformations
at the ends of the chain, i.e., the pose of the robot tool in camera coordinates (cam Htool ,CameraPose) and the
pose of the calibration object in the robot base coordinate system (base Hcal ,CalibrationPose).
In contrast, the transformation in the middle of the chain, tool Hbase , is known but changes for each calibration
image, because it describes the pose of the robot moving the camera, or to be more exact its inverse pose (pose of
the base coordinate system in robot tool coordinates). You must specify the inverse robot poses in the calibration
images in the parameter RobotPoses.
Note that when calibrating SCARA robots it is not possible to determine the Z translation of CalibrationPose.
To eliminate this ambiguity the Z translation of CalibrationPose is internally set to 0.0 and the CameraPose
is calculated accordingly. It is necessary to determine the true translation in Z after the calibration (see
calibrate_hand_eye).
Stationary camera
In this configuration, the robot grasps the calibration object and moves it in front of the camera. Again, the
information extracted from a calibration image, i.e., the pose of the calibration object in camera coordinates (the
external camera parameters), are equal to a chain of poses or homogeneous transformation matrices, this time from
the calibration object via the robot’s tool to its base and finally to the camera:
Stationary camera: cam Hcal = cam Hbase · base Htool · tool Hcal
(the corresponding parameters are CameraPose, RobotPoses, and CalibrationPose)
Analogously to the configuration with a moving camera, the operator hand_eye_calibration determines
the two transformations at the ends of the chain, here the pose of the robot base coordinate system in cam-
era coordinates (cam Hbase ,CameraPose) and the pose of the calibration object relative to the robot tool
(tool Hcal ,CalibrationPose).
The transformation in the middle of the chain, base Htool , describes the pose of the robot moving the calibration
object, i.e., the pose of the tool relative to the base coordinate system. You must specify the robot poses in the
calibration images in the parameter RobotPoses.
Note that when calibrating SCARA robots it is not possible to determine the Z translation of CalibrationPose.
To eliminate this ambiguity the Z translation of CalibrationPose is internally set to 0.0 and the CameraPose
is calculated accordingly. It is necessary to determine the true translation in Z after the calibration (see
calibrate_hand_eye).
Additional information about the calibration process
The following sections discuss individual questions arising from the use of hand_eye_calibration. They
are intended to be a guideline for using the operator in an application, as well as to help understanding the operator.
How do I get 3D calibration points and their projections? 3D calibration points given in the world coordinate
system (X, Y, Z) and their associated projections in the image (Row, Col) form the basis of the hand-eye
calibration. In order to be able to perform a successful hand-eye calibration, you need at least three images of
the 3D calibration points that were obtained under different poses of the manipulator. In each image at least
four points must be available, in order to compute internally the pose transferring the calibration points from
their world coordinate system into the camera coordinate system.
In principle, you can use arbitrary known points for the calibration. However, it is usually most convenient
to use the standard calibration plate, e.g., the one that can be generated with gen_caltab. In this case,
you can use the operators find_caltab and find_marks_and_pose to extract the position of the
calibration plate and of the calibration marks and the operator caltab_points to read the 3D coordinates
of the calibration marks (see also the description of camera_calibration).
The parameter NumPoints specifies the number of 3D calibration points used for each pose of the manip-
ulator, i.e., for each image. With this, the 3D calibration points which are stored in a linearized fashion in
X, Y, Z, and their corresponding projections (Row, Col) can be associated with the corresponding pose of
the manipulator (RobotPoses). Note that in contrast to the operator camera_calibration the 3D
coordinates of the calibration points must be specified for each calibration image, not only once, and thus can
vary for each image of the sequence.
How do I acquire a suitable set of images? The following conditions, especially if using a standard calibration
plate, should be considered:
• The position of the calibration marks (moving camera: relative to the robot’s base; stationary camera:
relative to the robot’s tool) and the position of the camera (moving camera: relative to the robot’s tool;
stationary camera: relative to the robot’s base) must not be changed between the images.
• The internal camera parameters (CameraParam) must be constant and must be determined in a previ-
ous camera calibration step (see camera_calibration). Note that changes of the image size, the
focal length, the aperture, or the focus cause a change of the internal camera parameters.
• The theoretical lower limit of the number of images to acquire is three. Nevertheless, it is recommended
to have 10 or more images at hand, in which the positions of the camera or the robot hand are sufficiently
different.
For articulated (i.e., non-SCARA) robots the amount of rotation between the images is essential and
should be at least 30 degrees or better 60 degrees. The rotations between the images must exhibit at least
two different axes of rotation. Very different orientations lead to precise calibration results. For SCARA
robots there is only one axis of rotation. The amount of rotation between the images should also be large.
• In each image, the calibration plate must be completely visible (including its border).
• Reflections or other disturbances should not impair the detection of the calibration plate and its calibra-
tion marks.
• If individual calibration marks instead of the standard calibration plate are used at least four marks must
be present in each image.
• In each image, the calibration plate should at least fill one quarter of the entire image for a precise
computation of the calibration to camera transformation, which is performed internally during hand-eye
calibration.
• As mentioned, the camera must not be modified between the acquisition of the individual images. Please
make sure that the focus is sufficient for the expected changes of the camera to calibration plate distance.
Therefore, bright lighting conditions for the calibration plate are important, because then you can use
smaller apertures, which result in a larger depth of focus.
How do I obtain the poses of the robot? In the parameter RobotPoses you must pass the poses of the robot in
the calibration images (moving camera: pose of the robot base in robot tool coordinates; stationary camera:
pose of the robot tool in robot base coordinates) in a linearized fashion. We recommend creating the robot
poses in a separate program and saving them in files using write_pose. In the calibration program you can then
read and accumulate them in a tuple as shown in the example program below. In addition, we recommend
saving the pose of the robot tool in robot base coordinates independent of the hand-eye configuration. When
using a moving camera, you then invert the read poses before accumulating them. This is also shown in the
example program.
Via the Cartesian interface of the robot, you can typically obtain the pose of the tool in base coordinates in
a notation that corresponds to the pose representations with the codes 0 or 2 (OrderOfRotation = ’gba’
or ’abg’, see create_pose). In this case, you can directly use the pose values obtained from the robot as
input for create_pose.
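For example, if the robot reports the tool pose as translations in meters and rotation angles in degrees with
rotation order ’abg’, the pose could be created as follows (the numeric values are placeholders):
create_pose (0.123, -0.257, 0.905, 12.4, 178.2, 35.6, 'Rp+T', 'abg', 'point', \
             ToolInBasePose)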
If the Cartesian interface of your robot describes the orientation in a different way, e.g., with the representation
ZYZ (Rz (ϕ1) · Ry (ϕ2) · Rz (ϕ3)), you can create the corresponding homogeneous transformation matrix
step by step using the operators hom_mat3d_rotate and hom_mat3d_translate and then convert
the matrix into a pose using hom_mat3d_to_pose. The following example code creates a pose from the
ZYZ representation described above:
hom_mat3d_identity(HomMat3DIdent)
hom_mat3d_rotate(HomMat3DIdent, phi3, ’z’, 0, 0, 0, HomMat3DRotZ)
hom_mat3d_rotate(HomMat3DRotZ, phi2, ’y’, 0, 0, 0, HomMat3DRotZY)
hom_mat3d_rotate(HomMat3DRotZY, phi1, ’z’, 0, 0, 0, HomMat3DRotZYZ)
hom_mat3d_translate(HomMat3DRotZYZ, Tx, Ty, Tz, base_H_tool)
hom_mat3d_to_pose(base_H_tool, RobPose)
Please note that the hand-eye calibration only works if the robot poses RobotPoses are specified with high
accuracy!
What is the order of the individual parameters? The length of the tuple NumPoints corresponds to the num-
ber of different positions of the manipulator and thus to the number of calibration images. The parameter
NumPoints determines the number of calibration points used in the individual positions. If the standard
calibration plate is used, this means 49 points per position (image). If, for example, 15 images were acquired,
NumPoints is a tuple of length 15, where all elements of the tuple have the value 49.
The number of images in the sequence, which is determined by the length of NumPoints, must also be taken
into account for the tuples of the 3D calibration points and the extracted 2D marks, respectively. Hence,
for 15 calibration images with 49 calibration points each, the tuples X, Y, Z, Row, and Col must contain
15 · 49 = 735 values each. These tuples are ordered according to the image the respective points lie in, i.e.,
the first 49 values correspond to the 49 calibration points in the first image. The order of the 3D calibration
points and the extracted 2D calibration points must be the same in each image.
The length of the tuple RobotPoses also depends on the number of calibration images. If, for example, 15
images and therefore 15 poses are used, the length of the tuple RobotPoses is 15 · 7 = 105 (15 times 7
pose parameters). The first seven parameters thus determine the pose of the manipulator in the first image,
and so on.
Algorithm and output parameters The parameter Method determines the type of algorithm used for the hand-
eye calibration: With ’linear’ a linear algorithm is chosen, which is fast but in many practical situations not
accurate enough. ’nonlinear’ selects a non-linear algorithm, which results in the most accurately calibrated
poses and which is the method of choice.
For the calibration of SCARA robots the parameter Method must be set to ’scara_linear’ or
’scara_nonlinear’, respectively. While the arm of an articulated robot has three rotary joints typically cov-
ering 6 degrees of freedom (3 translations and 3 rotations), SCARA robots have two parallel rotary joints
and one parallel prismatic joint covering only 4 degrees of freedom (3 translations and 1 rotation). Loosely
speaking, an articulated robot is able to tilt its end effector while a SCARA robot is not.
The parameter QualityType switches between different possibilities for assessing the quality of the cali-
bration result returned in Quality. ’error_pose’ stands for the pose error of the complete chain of transfor-
mations. To be more precise, a tuple with four elements is returned, where the first element is the root-mean-
square error of the translational part, the second element is the root-mean-square error of the rotational part,
the third element is the maximum translational error and the fourth element is the maximum rotational error.
With ’standard_deviation’, a tuple with 12 elements containing the standard deviations of the two poses is
returned: The first six elements refer to the camera pose and the others to the pose of the calibration points.
With ’covariance’, the full 12x12 covariance matrix of both poses is returned. Like poses, the standard devi-
ations and the covariances are specified in the units [m] and [°]. Note that selecting ’linear’ or ’scara_linear’
for the parameter Method enables only the output of the pose error (’error_pose’).
Parameters
. X (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Linear list containing all the x coordinates of the calibration points (in the order of the images).
. Y (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Linear list containing all the y coordinates of the calibration points (in the order of the images).
. Z (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Linear list containing all the z coordinates of the calibration points (in the order of the images).
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Linear list containing all row coordinates of the calibration points (in the order of the images).
. Col (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Linear list containing all the column coordinates of the calibration points (in the order of the images).
. NumPoints (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Number of the calibration points for each image.
. RobotPoses (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose-array ; real / integer
Known 3D pose of the robot for each image (moving camera: robot base in robot tool coordinates; stationary
camera: robot tool in robot base coordinates).
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Method of hand-eye calibration.
Default: ’nonlinear’
List of values: Method ∈ {’linear’, ’nonlinear’, ’scara_linear’, ’scara_nonlinear’}
. QualityType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Type of quality assessment.
Default: ’error_pose’
List of values: QualityType ∈ {’error_pose’, ’standard_deviation’, ’covariance’}
* Note that, in order to use this code snippet, you must provide
* the camera parameters, the calibration plate description file,
* the calibration images, and the robot poses.
read_cam_par('campar.dat', CameraParam)
CalDescr := 'caltab.descr'
caltab_points(CalDescr, X, Y, Z)
* Initialize the accumulation tuples (NumImages and IsMovingCameraConfig
* are assumed to be set beforehand).
RCoord := []
CCoord := []
XCoord := []
YCoord := []
ZCoord := []
NumMarker := []
RobotPoses := []
* Process all calibration images.
for i := 0 to NumImages-1 by 1
read_image(Image, 'calib_'+i$'02d')
* Find marks on the calibration plate in every image.
find_caltab(Image, CalPlate, CalDescr, 3, 150, 5)
find_marks_and_pose(Image, CalPlate, CalDescr, CameraParam, 128, 10, 18, \
0.9, 15, 100, RCoordTmp, CCoordTmp, StartPose)
* Accumulate 2D and 3D coordinates of the marks.
RCoord := [RCoord, RCoordTmp]
CCoord := [CCoord, CCoordTmp]
XCoord := [XCoord, X]
YCoord := [YCoord, Y]
ZCoord := [ZCoord, Z]
NumMarker := [NumMarker, |RCoordTmp|]
* Read pose of the robot tool in robot base coordinates.
read_pose('robpose_'+i$'02d'+'.dat', RobPose)
* Moving camera? Invert pose.
if (IsMovingCameraConfig == 'true')
pose_to_hom_mat3d(RobPose, base_H_tool)
hom_mat3d_invert(base_H_tool, tool_H_base)
hom_mat3d_to_pose(tool_H_base, RobPose)
endif
* Accumulate robot poses.
RobotPoses := [RobotPoses, RobPose]
endfor
*
* Perform hand-eye calibration.
*
hand_eye_calibration(XCoord, YCoord, ZCoord, RCoord, CCoord, NumMarker, \
RobotPoses, CameraParam, 'nonlinear', 'error_pose', \
CameraPose, CalibrationPose, Error)
Result
The operator hand_eye_calibration returns the value 2 (H_MSG_TRUE) if the given parameters are correct.
Otherwise, an exception will be raised.
Execution Information
Possible Predecessors
find_marks_and_pose, camera_calibration, calibrate_cameras
Possible Successors
write_pose, convert_pose_type, pose_to_hom_mat3d, disp_caltab, sim_caltab
Alternatives
calibrate_hand_eye
See also
find_caltab, find_marks_and_pose, disp_caltab, sim_caltab, write_cam_par,
read_cam_par, create_pose, convert_pose_type, write_pose, read_pose,
pose_to_hom_mat3d, hom_mat3d_to_pose, caltab_points, gen_caltab,
calibrate_hand_eye
References
K. Daniilidis: “Hand-Eye Calibration Using Dual Quaternions”; International Journal of Robotics Research, Vol.
18, No. 3, pp. 286-298; 1999.
M. Ulrich, C. Steger: “Hand-Eye Calibration of SCARA Robots Using Dual Quaternions”; Pattern Recognition
and Image Analysis, Vol. 26, No. 1, pp. 231-239; January 2016.
Module
Calibration
Note that, since the model CalibDataID uses a general sensor and no calibration object (i.e., the model was
created by create_calib_data with NumCameras=0 and NumCalibObjects=0), both CameraIdx and
CalibObjIdx must be set to 0. If the model uses a camera and a calibration object (i.e., NumCameras=1
and NumCalibObjects=1), then find_calib_object or set_calib_data_observ_points must
be used.
The observation pose data can be accessed later by calling get_calib_data_observ_pose using the same
values for the arguments CameraIdx, CalibObjIdx, and CalibObjPoseIdx.
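For illustration, the following minimal sketch (not part of the original reference) shows how an observation pose measured by a general 3D sensor could be added to such a model; the pose values are placeholders, and the argument order (CalibDataID, CameraIdx, CalibObjIdx, CalibObjPoseIdx, ObjInCamPose) is assumed.
* Model with a general sensor and no calibration object.
create_calib_data ('hand_eye_moving_cam', 0, 0, CalibDataID)
* Pose of the calibration object as measured by the sensor (placeholder values).
create_pose (0.05, 0.0, 0.3, 10, 0, 0, 'Rp+T', 'gba', 'point', ObjInCamPose)
* Store it as observation 0 (CameraIdx and CalibObjIdx must both be 0 here).
set_calib_data_observ_pose (CalibDataID, 0, 0, 0, ObjInCamPose)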
Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. CameraIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observing camera.
Default: 0
Suggested values: CameraIdx ∈ {0, 1, 2}
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
find_marks_and_pose, set_calib_data_cam_param, set_calib_data_calib_object
Possible Successors
set_calib_data, calibrate_cameras
Alternatives
find_calib_object
Module
Calibration
The advantage of representing the line of sight as two points is that it is easier to transform the line in 3D. To do
so, all that is necessary is to apply the operator affine_trans_point_3d to the two points.
Parameters
. Row (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Row coordinate of the pixel.
. Column (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Column coordinate of the pixel.
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. PX (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
X coordinate of the first point on the line of sight in the camera coordinate system
. PY (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Y coordinate of the first point on the line of sight in the camera coordinate system
. PZ (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Z coordinate of the first point on the line of sight in the camera coordinate system
. QX (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
X coordinate of the second point on the line of sight in the camera coordinate system
. QY (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Y coordinate of the second point on the line of sight in the camera coordinate system
. QZ (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Z coordinate of the second point on the line of sight in the camera coordinate system
Example
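The following minimal sketch (not from the original reference; the camera parameters and the camera pose CamPoseInWorld are placeholders) computes the line of sight for one pixel and transforms it into world coordinates with affine_trans_point_3d.
* Placeholder internal camera parameters.
gen_cam_par_area_scan_division (0.012, 0, 5.2e-6, 5.2e-6, 320, 240, 640, 480, \
                                CameraParam)
* Line of sight through the pixel (240, 320) in camera coordinates.
get_line_of_sight (240, 320, CameraParam, PX, PY, PZ, QX, QY, QZ)
* Transform both points into world coordinates (CamPoseInWorld is assumed
* to be the pose of the camera in the world coordinate system).
pose_to_hom_mat3d (CamPoseInWorld, HomMat3D)
affine_trans_point_3d (HomMat3D, PX, PY, PZ, WPX, WPY, WPZ)
affine_trans_point_3d (HomMat3D, QX, QY, QZ, WQX, WQY, WQZ)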
Result
get_line_of_sight returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception
is raised.
Execution Information
6.6 Monocular
camera_calibration performs the calibration of a single camera. For this, known 3D model points (with
coordinates NX, NY, NZ) are projected into the image and the sum of the squared distances between the projected
3D-coordinates and their corresponding image point coordinates (NRow, NCol) is minimized.
As initial values for the minimization process the external (NStartPose) and internal (StartCamParam) cam-
era parameters are used. Thereby NStartPose is an ordered tuple with all initial values for the external camera
parameters given in the form ccs_P_wcs, where ccs denotes the camera coordinate system and wcs the world co-
ordinate system (see Transformations / Poses and “Solution Guide III-C - 3D Vision”). Individual
camera parameters can be explicitly included or excluded from the minimization with EstimateParams. For a
detailed description of the available camera models, the different sets of internal camera parameters, and general
requirements for the setup, see Calibration.
For a successful calibration, at least one calibration object with accurately known metric properties is needed, e.g.,
a HALCON calibration plate. Before calling camera_calibration, take a series of images of the calibration
object in different orientations and make sure that the whole field of view or measurement volume is covered. The
success of the calibration highly depends on the quality of the calibration object and the images. So you might
want to exercise special diligence during the acquisition of the calibration images. See the section “How to take a
set of suitable images?” in Calibration for further details.
After a successful calibration, camera_calibration returns the optimized internal (CameraParam) and
external (NFinalPose, given as ccs_P_wcs) camera parameters of the camera. Additionally, the root mean square error
(RMSE) of the back projection of the optimization is returned in Errors (in pixels). This error gives a general
indication of whether the optimization was successful.
Preparation of the calibration process
How to extract the calibration marks in the images? If a HALCON calibration plate is used, you can use the
operator find_calib_object to determine the coordinates of the calibration marks in each image and
to compute a rough estimate for the external camera parameters. Using HALCON calibration plates with
rectangularly arranged marks (see gen_caltab), a combination of the two operators find_caltab and
find_marks_and_pose will have the same effect. In both cases, the values obtained in this way can directly
be used as initial values for the external camera parameters (NStartPose).
Obviously, images in which the segmentation of the calibration plate (find_caltab) has failed
or the calibration marks have not been determined successfully by find_marks_and_pose or
find_calib_object should not be used.
How do you get the required initial values for the calibration? If you use a HALCON calibration plate, the in-
put parameters NX, NY, and NZ are stored in the description file of the calibration plate. You can easily
access them by calling the operator caltab_points. Initial values for the internal camera parameters
(StartCamParam) can be obtained from the specifications of the used camera. Further information can
be found in Calibration. Initial values for the poses of the calibration plate and the coordinates of the cal-
ibration marks NRow and NCol can be calculated using the operator find_calib_object. The tuple
NStartPose is set by the concatenation of all these poses.
Which camera parameters are estimated? The input parameter EstimateParams is used to select which
camera parameters to estimate. Usually, this parameter is set to ’all’, i.e., all 6 external camera param-
eters (translation and rotation) and all internal camera parameters are determined. If the internal camera
parameters already have been determined (e.g., by a previous call to camera_calibration), it is often
desired to only determine the pose of the world coordinate system in camera coordinates (i.e., the external
camera parameters). In this case, EstimateParams can be set to ’pose’. This has the same effect as
EstimateParams = [’alpha’,’beta’,’gamma’,’transx’,’transy’,’transz’]. Otherwise, EstimateParams
contains a tuple of strings that indicates the combination of parameters to estimate. In addition, parameters
can be excluded from estimation by using the prefix ~. For example, the values [’pose’,’~transx’] have the
same effect as [’alpha’,’beta’,’gamma’,’transy’,’transz’]. As a different example, [’all’,’~focus’] determines
all internal and external parameters except the focus. The prefix ~ can be used with all parameter values
except ’all’.
Which limitations exist for the determination of the camera parameters? For additional information about
general limitations when determining camera parameters, please see the section “Further Limitations Re-
lated to Specific Camera Types” in the chapter Calibration.
What is the order within the individual parameters? The length of the tuple NStartPose depends on the
number of calibration images, e.g., using 15 images leads to a length of the tuple NStartPose equal to
15 · 7 = 105 (15 times the 7 external camera parameters). The first 7 values correspond to the pose of the
calibration plate in the first image, the next 7 values to the pose in the second image, etc.
This fixed number of calibration images must be considered within the tuples with the coordinates of the 3D
model marks and the extracted 2D marks. If 15 images are used, the length of the tuples NRow and NCol
is 15 times the length of the tuples with the coordinates of the 3D model marks (NX, NY, and NZ). If every
image contains 49 marks, the length of the tuples NRow and NCol is 15 · 49 = 735, while the length of the
tuples NX, NY, and NZ is 49. The order of the values in NRow and NCol is “image after image”, i.e., using
49 marks the first 3D model point corresponds to the 1st, 50th, 99th, 148th, 197th, 246th, etc. extracted 2D
mark.
What is the meaning of the output parameters? If the camera calibration process has finished successfully, the
output parameters CameraParam and NFinalPose contain the adjusted values for the internal and ex-
ternal camera parameters. The length of the tuple NFinalPose corresponds to the length of the tuple
NStartPose.
The representation types of NFinalPose correspond to the representation type of the first tuple of
NStartPose (see create_pose). You can convert the representation type by convert_pose_type.
As an additional parameter, the root mean square error (RMSE) (Errors) of the back projection of the
optimization is returned. This parameter reflects the accuracy of the calibration. The error value (root mean
square error of the position) is measured in pixels. If only a single camera is calibrated, a value of Errors in the
order of 0.1 pixel (the typical detection error when extracting the coordinates of the projected calibration marks)
is an indication that the optimization fits the observation data well. If Errors differs strongly from 0.1 pixels,
the calibration did not perform well. Reasons for this might be, e.g., a poor image quality, an insufficient
number of calibration images, or an inaccurate calibration plate.
Do I have to use a planar calibration object? No. The operator camera_calibration is designed in a way
that the input tuples NX, NY, NZ, NRow, and NCol can contain any 3D/2D correspondences. The order of the
single parameters is explained in the paragraph “What is the order within the individual parameters?”.
Thus, it makes no difference how the required 3D model marks and the corresponding 2D marks are de-
termined. On the one hand, it is possible to use a 3D calibration object, on the other hand, you also
can use any characteristic points (e.g., natural landmarks) with known position in the world. By setting
EstimateParams to ’pose’, it is thus possible to compute the pose of an object in camera coordinates!
For this, at least three 3D/2D-correspondences are necessary as input. NStartPose can, e.g., be generated
directly as shown in the program example for create_pose.
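The following minimal sketch (not part of the original reference) summarizes the workflow described above for a calibration plate with rectangularly arranged marks; the file names, the number of images, and the initial camera parameters are placeholders that must be adapted to the actual setup.
CalDescr := 'caltab.descr'
caltab_points (CalDescr, NX, NY, NZ)
* Placeholder initial internal camera parameters.
gen_cam_par_area_scan_division (0.012, 0, 5.2e-6, 5.2e-6, 320, 240, 640, 480, \
                                StartCamParam)
NRow := []
NCol := []
NStartPose := []
for I := 0 to NumImages-1 by 1
    read_image (Image, 'calib_' + I$'02d')
    find_caltab (Image, CalPlate, CalDescr, 3, 112, 5)
    find_marks_and_pose (Image, CalPlate, CalDescr, StartCamParam, 128, 10, 18, \
                         0.9, 15, 100, RTmp, CTmp, PoseTmp)
    NRow := [NRow, RTmp]
    NCol := [NCol, CTmp]
    NStartPose := [NStartPose, PoseTmp]
endfor
camera_calibration (NX, NY, NZ, NRow, NCol, StartCamParam, NStartPose, 'all', \
                    CameraParam, NFinalPose, Errors)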
Attention
The minimization process of the calibration depends on the initial values of the internal (StartCamParam) and
external (NStartPose) camera parameters. The computed average errors Errors give an impression of the
accuracy of the calibration. The errors (deviations in x- and y-coordinates) are measured in pixels.
For line scan cameras, it is possible to set the start value for the internal camera parameter Sy to the value
0.0. In this case, it is not possible to determine the position of the principal point in y-direction. Therefore,
EstimateParams must contain the term ’~cy’. The effective distance of the principal point from the sensor line
is then always pv = Sy · Cy = 0.0. Further information can be found in the section “Further Limitations Related to
Specific Camera Types” of Calibration.
Parameters
. NX (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number-array ; real / integer
Ordered tuple with all x coordinates of the calibration marks (in meters).
. NY (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number-array ; real / integer
Ordered tuple with all y coordinates of the calibration marks (in meters).
Number of elements: NY == NX
. NZ (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .number-array ; real / integer
Ordered tuple with all z coordinates of the calibration marks (in meters).
Number of elements: NZ == NX
. NRow (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Ordered tuple with all row coordinates of the extracted calibration marks (in pixels).
. NCol (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Ordered tuple with all column coordinates of the extracted calibration marks (in pixels).
Number of elements: NCol == NRow
. StartCamParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Initial values for the internal camera parameters.
Result
camera_calibration returns 2 (H_MSG_TRUE) if all parameter values are correct and the desired camera
parameters have been determined by the minimization algorithm. If necessary, an exception is raised.
Execution Information
Possible Predecessors
find_marks_and_pose, caltab_points, read_cam_par
Possible Successors
write_pose, pose_to_hom_mat3d, disp_caltab, sim_caltab
Alternatives
calibrate_cameras
See also
find_caltab, find_marks_and_pose, disp_caltab, sim_caltab, write_cam_par,
read_cam_par, create_pose, convert_pose_type, write_pose, read_pose,
pose_to_hom_mat3d, hom_mat3d_to_pose, caltab_points, gen_caltab,
calibrate_cameras
Module
Calibration
6.7 Multi-View
• The number of cameras in the setup and the number of used calibration objects can be set when calling
create_calib_data.
• When specifying the camera type with set_calib_data_cam_param, note that only cameras of the
same type (i.e., area scan or line scan) can be calibrated in a single setup.
• Configure the calibration process, e.g., specify the reference camera, using set_calib_data. You can
also specify parameters for the complete setup or just configure parameters of individual cameras as well as
calibration object poses in the setup.
1. Building a chain of observation poses: In the first step, the operator calibrate_cameras tries to build
a valid chain of observation poses, that connects all cameras and calibration object poses to the reference
camera. Depending on the setup, the conditions for a valid chain of poses differ. For specific information
see the respective paragraphs below.
If there is a camera that cannot be reached (i.e., it is not observing any calibration object pose that can
be connected in the chain), the calibration process is terminated with an error. Otherwise, the algorithm
initializes all calibration items’ poses by going down this chain.
2. First optimization: In this step, calibrate_cameras performs the actual optimization for all optimiza-
tion parameters that were not explicitly excluded from the calibration.
3. Second optimization: Based on the so-far calibrated cameras, the algorithm corrects all observations that
contain mark contour information (see find_calib_object). Then, the calibration setup is optimized
anew for the corrections to take effect. If no contour information was available, this step is skipped.
4. Compute quality of parameter estimation: In the last step, calibrate_cameras computes the stan-
dard deviations and the covariances of the calibrated internal camera parameters.
The following paragraphs give further information about the conditions specific to the camera setups.
Projective area scan cameras For a setup with projective area scan cameras, the calibration is performed in the
four steps listed above. The algorithm tries to build a chain of observation poses that connects all cameras
and calibration object poses to the reference camera like in the diagram below.
(1) All cameras can be connected by a chain of observation poses. (2) The leftmost camera is isolated,
because the left calibration plate cannot be seen by any other camera.
Possible projective area scan cameras are:
• ’area_scan_division’
• ’area_scan_polynomial’
• ’area_scan_tilt_division’
• ’area_scan_tilt_polynomial’
• ’area_scan_tilt_image_side_telecentric_division’
• ’area_scan_tilt_image_side_telecentric_polynomial’
• ’area_scan_hypercentric_division’
• ’area_scan_hypercentric_polynomial’
Telecentric area scan cameras For a setup with telecentric area scan cameras, similar to projective area scan
cameras, the same four steps that are listed above are executed. In the first step (building a chain of observa-
tion poses that connects all cameras and calibration objects), additional conditions must hold. Since the pose
of an object can only be determined up to a translation along the optical axis, each calibration object must be
observed by at least two cameras to determine its relative location. Otherwise, its pose is excluded from the
calibration. Also, since a planar calibration object appears the same from two different observation angles,
the relative pose of the cameras among each other cannot be determined unambiguously. Therefore, there
are always two valid alternative relative poses. Both alternatives result in a consistent camera setup which
can be used for measuring. Since the ambiguity cannot be resolved, the first of the alternatives is returned.
Note that, if the returned pose is not the real pose but the alternative one, then this will result in a mirrored
reconstruction.
Possible telecentric area scan cameras are:
• ’area_scan_telecentric_division’
• ’area_scan_telecentric_polynomial’
• ’area_scan_tilt_bilateral_telecentric_division’
• ’area_scan_tilt_bilateral_telecentric_polynomial’
• ’area_scan_tilt_object_side_telecentric_division’
• ’area_scan_tilt_object_side_telecentric_polynomial’
Projective and telecentric area scan cameras For a mixed setup with projective and telecentric area scan cam-
eras, the algorithm performs the same four steps as enumerated above. Possible ambiguities during the first
step (building a chain of observation poses that connects all cameras and calibration objects), as described
above for the setup with telecentric cameras, can be resolved as long as there exists a chain of observation
poses consisting of all perspective cameras and a sufficient number of calibration objects. Here, sufficient
number means that each telecentric camera observes at least two calibration objects of this chain.
Line scan cameras Setups with telecentric line scan cameras (’line_scan_telecentric’) behave identically to se-
tups with telecentric area scan cameras and the same restrictions and ambiguities that are described above
apply. For this type of setup, two possible configurations can be distinguished. In the first configuration,
all cameras are mounted rigidly and stationary and the object is moved linearly in front of the cameras.
Alternatively, all cameras are mounted rigidly with respect to each other and are moved across the object by
the same linear actuator. In both cases, all cameras share a common motion vector, which is modeled in the
camera coordinate system of the reference camera and is transformed to the camera coordinate systems of all
other cameras by the rotation part of the respective camera’s pose. This configuration is assumed by default.
In the second configuration, the cameras are moved by independent linear actuators in different directions.
In this case, each camera has its own independent motion vector. The type of configuration can be selected
with set_calib_data.
Note that two different stereo setups are common for telecentric line scan cameras. For both setups, a linear,
constant motion is assumed for the observed object or the camera system respectively.
• For along-track setups, one camera is placed in front, looking backwards, while the second camera is
mounted behind, looking forwards, both at a suitable angle with respect to the motion vector.
• The cameras in an across-track setup are all directed perpendicular to the motion vector, while the
viewing planes are approximately coplanar. Therefore, the depth of field is rather limited. Precise
measurements are only possible in areas where the depths of field of the individual cameras overlap.
Stereo setups for telecentric line scan cameras: (1) Along-track setup and (2) Across-track setup.
For setups with projective line scan cameras (’line_scan’), the following restriction exists: only one camera
can be calibrated and only one calibration object per setup can be used.
Finally, for calibration plates with rectangularly arranged marks (see gen_caltab) all observations must contain
the projection coordinates of all calibration marks of the calibration object. For calibration plates with hexagonally
arranged marks (see create_caltab) this restriction is not applied. You can find further information about cal-
ibration plates and the acquisition of calibration images in the section “Additional information about the calibration
process” within the chapter Calibration.
Checking the Success of the Calibration
If more than one camera is calibrated simultaneously, the value of Error is more difficult to judge. As a rule
of thumb, Error should be as small as possible and at least smaller than 1.0, thus indicating that a subpixel
precise evaluation of the data is possible with the calibrated parameters. This value might be difficult to reach in
particular configurations. For further analysis of the quality of the calibration, refer to the standard deviations and
covariances of the estimated parameters.
Getting the Calibration Results
The results of the calibration, i.e., internal camera parameters, camera poses (external camera parameters), calibra-
tion object poses, etc., can be queried with get_calib_data.
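As an illustration, a minimal sketch (not part of the original reference; file names, start parameters, and the number of plate poses are placeholders) of a two-camera setup observing one calibration plate:
create_calib_data ('calibration_object', 2, 1, CalibDataID)
* Placeholder initial internal camera parameters for both cameras.
gen_cam_par_area_scan_division (0.012, 0, 5.2e-6, 5.2e-6, 320, 240, 640, 480, \
                                StartCamParam)
set_calib_data_cam_param (CalibDataID, 0, [], StartCamParam)
set_calib_data_cam_param (CalibDataID, 1, [], StartCamParam)
set_calib_data_calib_object (CalibDataID, 0, 'calplate_40mm.cpd')
for PoseIdx := 0 to NumPoses-1 by 1
    for CamIdx := 0 to 1 by 1
        read_image (Image, 'cam_' + CamIdx + '_plate_' + PoseIdx$'02d')
        find_calib_object (Image, CalibDataID, CamIdx, 0, PoseIdx, [], [])
    endfor
endfor
calibrate_cameras (CalibDataID, Error)
* Query the calibrated internal parameters and the pose of camera 1
* relative to the reference camera 0.
get_calib_data (CalibDataID, 'camera', 1, 'params', CamParams1)
get_calib_data (CalibDataID, 'camera', 1, 'pose', Cam1Pose)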
Note that the poses of telecentric cameras can only be determined up to a displacement along the z-axis of the
coordinate system of the respective camera (perpendicular to the image plane). Therefore, all camera poses are
moved along this axis until they all lie on a common sphere. The center of the sphere is defined by the pose of the
first calibration object. The radius of the sphere depends on the calibration setup. If projective and telecentric area
scan cameras are calibrated, the radius is the maximum over all distances from the perspective cameras to the first
calibration object. Otherwise, if only telecentric area scan cameras are considered, the radius is equal to 1 m.
Further Information
Learn about the calibration of multi-camera setups and many other topics in interactive online courses at our
MVTec Academy.
Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Back projection root mean square error (RMSE) of the optimization.
Execution Information
clear_calib_data ( : : CalibDataID : )
clear_camera_setup_model ( : : CameraSetupModelID : )
In the parameter CalibSetup, you specify the calibration setup type. Currently, five types are supported. A
model of the type ’calibration_object’ is used to calibrate the internal camera parameters and the camera poses of
one or more cameras based on the metric information extracted from observations of calibration objects.
A model of type ’hand_eye_moving_cam’, ’hand_eye_stationary_cam’, ’hand_eye_scara_moving_cam’, or
’hand_eye_scara_stationary_cam’ is used to perform a hand-eye calibration based on observations of a calibration
object and corresponding poses of a robot tool in the robot base coordinate system. These four model types
distinguish, on the one hand, whether the camera or the calibration object is moved by the robot and, on the other
hand, whether an articulated robot or a SCARA robot is calibrated. The arm of an articulated robot has
three rotary joints typically covering 6 degrees of freedom (3 translations and 3 rotations). SCARA robots have
two parallel rotary joints and one parallel prismatic joint covering only 4 degrees of freedom (3 translations and 1
rotation). Loosely speaking, an articulated robot is able to tilt its end effector while a SCARA robot is not.
NumCameras specifies the number of cameras that are calibrated simultaneously in the setup.
NumCalibObjects specifies the number of calibration objects observed by the cameras. Please note that for
camera calibrations with line scan cameras with perspective lenses only a single calibration object is allowed
(NumCalibObjects=1). For hand-eye calibrations, only two setups are currently supported: either one area
scan projective camera and one calibration object (NumCameras=1, NumCalibObjects=1) or a general sensor
with no calibration object (NumCameras=0, NumCalibObjects=0). Attention: The four hand-eye calibration
models do not support telecentric cameras.
CalibDataID returns a handle of the new calibration data model. You pass this handle to other operators to col-
lect the description of the camera setup, the calibration settings, and the calibration data. For camera calibrations,
you pass it to calibrate_cameras, which performs the actual camera calibration and stores the calibration
results in the calibration data model. For a detailed description of the preparation process, please refer to the chap-
ter Calibration. For hand-eye calibrations, you pass it to calibrate_hand_eye, which performs the actual
hand-eye calibration and stores the calibration results in the calibration data model. For a detailed description of
the preparation process, please refer to the operator calibrate_hand_eye.
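A minimal sketch (not part of the original reference) of the model creation variants described above:
* Camera calibration: two cameras, one calibration object.
create_calib_data ('calibration_object', 2, 1, CalibDataID1)
* Hand-eye calibration with a moving camera: one camera, one calibration object.
create_calib_data ('hand_eye_moving_cam', 1, 1, CalibDataID2)
* Hand-eye calibration with a general sensor and no calibration object.
create_calib_data ('hand_eye_moving_cam', 0, 0, CalibDataID3)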
Parameters
. CalibSetup (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the calibration setup.
Default: ’calibration_object’
List of values: CalibSetup ∈ {’calibration_object’, ’hand_eye_moving_cam’,
’hand_eye_stationary_cam’, ’hand_eye_scara_moving_cam’, ’hand_eye_scara_stationary_cam’}
. NumCameras (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Number of cameras in the calibration setup.
Default: 1
Restriction: NumCameras >= 0
. NumCalibObjects (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Number of calibration objects.
Default: 1
Restriction: NumCalibObjects >= 0
. CalibDataID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of the created calibration data model.
Execution Information
Using set_camera_setup_param, you can change the coordinate system in which the cameras are repre-
sented: You can either select a camera and convert all camera poses to be relative to this camera or you can apply
a general coordinate transformation, which moves the setup’s coordinate system into an arbitrary pose. Changing
the coordinate system of the camera setup is particularly useful in cases, where, e.g., you want to represent the
cameras in the coordinate system of an object being observed by the cameras. This concept is further demonstrated
in the example below.
The internal parameters and pose of a camera are set or modified by set_camera_setup_cam_param. Fur-
ther camera parameters and general setup parameters can be set by set_camera_setup_param as well. All
parameters can be read back by get_camera_setup_param.
A camera setup model can be saved into a file by write_camera_setup_model and read back by
read_camera_setup_model.
Parameters
. NumCameras (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of cameras in the setup.
Default: 2
Suggested values: NumCameras ∈ {1, 2, 3, 4}
Restriction: NumCameras >= 1
. CameraSetupModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . camera_setup_model ; handle
Handle to the camera setup model.
Example
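A minimal sketch (not part of the original reference): the camera parameters and poses are placeholders, and the argument order of set_camera_setup_cam_param (setup handle, camera index, camera type, camera parameters, camera pose) is assumed.
create_camera_setup_model (2, CameraSetupModelID)
* Placeholder internal parameters, identical for both cameras.
gen_cam_par_area_scan_division (0.012, 0, 5.2e-6, 5.2e-6, 320, 240, 640, 480, \
                                CameraParam)
* Camera 1 is placed 0.1 m to the right of camera 0 (placeholder poses).
create_pose (0, 0, 0, 0, 0, 0, 'Rp+T', 'gba', 'point', Pose0)
create_pose (0.1, 0, 0, 0, 0, 0, 'Rp+T', 'gba', 'point', Pose1)
set_camera_setup_cam_param (CameraSetupModelID, 0, [], CameraParam, Pose0)
set_camera_setup_cam_param (CameraSetupModelID, 1, [], CameraParam, Pose1)
* Read back a camera pose and store the setup model in a file.
get_camera_setup_param (CameraSetupModelID, 1, 'pose', Pose1Read)
write_camera_setup_model (CameraSetupModelID, 'two_cameras.csm')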
Execution Information
Possible Predecessors
fread_serialized_item, receive_serialized_item, serialize_calib_data
Module
Calibration
deserialize_camera_setup_model (
: : SerializedItemHandle : CameraSetupModelID )
The operator deserialize_camera_setup_model deserializes a camera setup model that was serialized by
serialize_camera_setup_model from the serialized item SerializedItemHandle. The deserialized values are stored
in an automatically created camera setup model with the handle CameraSetupModelID.
Parameters
. SerializedItemHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
. CameraSetupModelID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . camera_setup_model ; handle
Handle to the camera setup model.
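Example
A minimal sketch (not part of the original reference; the file name is a placeholder) of serializing a camera setup model to a file and restoring it:
serialize_camera_setup_model (CameraSetupModelID, SerializedItemHandle)
fwrite_serialized_item (SerializedItemHandle, 'setup_model.ser')
* ... later, possibly in a different process:
fread_serialized_item ('setup_model.ser', SerializedItemHandle2)
deserialize_camera_setup_model (SerializedItemHandle2, CameraSetupModelID2)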
Result
If the parameters are valid, the operator deserialize_camera_setup_model returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
Before we describe the individual data you can query in detail, we provide you with an overview of which data
is available after the individual steps of the calibration processes. When calibrating cameras or a hand-eye sys-
tem, several operators are called that step by step fill the calibration data model with content. In the following,
for each operator a table lists the data that is added to the model. Additionally, you find information about the
combinations of the values for ItemType, ItemIdx, and DataName that are needed to query the information
with get_calib_data. For the different indices that are used within the tables the following abbreviations (or
potential variable names) are used:
Detailed descriptions of the data that can be queried can then be found in the specific sections that handle the
different categories of data individually.
To get detailed information about the calibration process of your camera setup see the chapter Calibration.
Content of Calibration Data Model When Calibrating Cameras
For each operator that extends the calibration model, a table is provided to give an overview of the respective data:
• create_calib_data:
For standard HALCON calibration plates, further calibration plate specific information is added to the model,
which is not accessible with get_calib_data but can be obtained directly from the corresponding calibration
plate description files instead (for details about the description files see create_caltab for a calibration plate
with hexagonally arranged marks and gen_caltab for a calibration plate with rectangularly arranged marks).
• set_calib_data:
• calibrate_cameras:
• create_calib_data:
See the section ’Content of Calibration Data Model When Calibrating Cameras’.
• set_calib_data:
• calibrate_hand_eye:
Moving camera scenario:
The following sections describe the parameters for the specific categories of data in more detail.
Model-Related Data
’camera_setup_model’: A handle to a camera setup model containing the poses and the internal parameters
for the calibrated cameras from the current calibration setup.
’camera_calib_error’: The root mean square error (RMSE) of the back projection of the optimization of the
camera system. Typically, this error is queried after a hand-eye calibration (calibrate_hand_eye)
was performed, where internally the camera system is calibrated without returning the error of the cam-
era calibration. The returned error is identical to the error returned by calibrate_cameras, except
for ’optimization_method’ set to ’stochastic’, which refines hand-eye poses and camera parameters si-
multaneously for articulated robots.
’hand_eye_calib_error’: After a successful hand-eye calibration, the pose error of the complete chain of
transformations is returned. To be more precise, a tuple with four elements is returned, where the first
element is the root-mean-square error of the translational part, the second element is the root-mean-
square error of the rotational part, the third element is the maximum translational error and the fourth
element is the maximum rotational error. The returned errors are identical to the errors returned by
calibrate_hand_eye.
’optimization_method’: Optimization method that was set for the hand-eye calibration (see
set_calib_data).
’camera_calib_error_corrected_tool’: The root mean square error (RMSE) of the back projection of the
calibration mark centers into camera images, via the pose chain using corrected tool poses. By con-
trast, ’camera_calib_error’ uses the direct back projection of ’calib_obj_pose’. This parameter is only
available if ’optimization_method’ is set to ’stochastic’.
’hand_eye_calib_error_corrected_tool’: After a successful hand-eye calibration, the pose error of
the complete chain of transformations using corrected tool poses is returned. By contrast,
’hand_eye_calib_error’ uses the input tool poses. This parameter is only available if ’optimiza-
tion_method’ is set to ’stochastic’.
The parameters ’reference_camera’, ’common_motion_vector’, and ’optimization_method’ can be set with
set_calib_data. The other parameters are set during the model creation or are a result of the calibration
process and cannot be modified.
Camera-Related Data
ItemType=’camera’: ItemIdx determines whether data is queried for all cameras in general or for a specific camera.
With ItemIdx=’general’, the default value of a parameter for all cameras is returned. In contrast, if you
pass a valid camera index instead, i.e., a number between 0 and NumCameras-1 (NumCameras is specified
during model creation with create_calib_data), only the parameter value of the specified camera is
returned.
By selecting the following parameters in DataName, you can query which camera parameters are (or have
been) optimized during the calibration performed by calibrate_cameras:
’calib_settings’: List of the camera parameters that are marked for calibration.
’excluded_settings’: List of camera parameters that are excluded from the calibration.
These parameters can be modified by a corresponding call to set_calib_data.
The following parameters can only be queried for a specific camera, i.e., you must pass a valid camera index
in ItemIdx:
’type’: The camera type that was set with set_calib_data_cam_param.
’init_params’: Initial internal camera parameters (set with set_calib_data_cam_param).
’params’: Optimized internal camera parameters.
’params_deviations’: Standard deviations of the optimized camera parameters, as estimated at the end of the
camera calibration. Note that if the tuple returned for ’params’ contains n elements, the tuple returned
for ’params_deviations’ contains (n − 1) elements since the camera parameter tuple contains the camera
type in the first element of the tuple, whereas the tuple returned for ’params_deviations’ does not contain
the camera type.
’params_covariances’: Covariance matrix of the optimized camera parameters, as estimated at the end of the
camera calibration. Note that if the tuple returned for ’params’ contains n elements, the tuple returned
for ’params_covariances’ contains (n − 1) × (n − 1) elements since the camera parameter tuple contains
the camera type in the first element of the tuple, whereas the tuple returned for ’params_covariances’
does not contain the camera type.
’params_labels’: A convenience list of labels for the entries returned by ’params’. This list is camera-type
specific. Note that this list contains the label ’camera_type’ in its first position. If the first element of
the tuple is removed, the list refers to the labels of ’params_deviations’ and the labels of the rows and
columns of ’params_covariances’.
’init_pose’: Initial camera pose, relative to the current reference camera. It is computed internally based on
observation poses during the calibration process (see Calibration).
’pose’: Optimized camera pose, relative to the current reference camera. If one single telecentric camera is
calibrated, the translation along the z-axis is set to the value 0.0. If more than one telecentric camera is
calibrated, the camera poses are moved in direction of their z-axis until they all lie on a sphere centered
at the first observed calibration plate. The radius of the sphere corresponds to the longest distance of a
camera to the first observed calibration plate. If this calculated distance is smaller than 1 m, the radius is
set to 1 m.
’pose_labels’: A convenience list of labels for the entries returned by ’pose’.
The calibrated camera parameters (’params’ and ’pose’) can be queried only after a successful execution
of calibrate_cameras. The initial internal camera parameters ’init_params’ can be queried after a
successful call to set_calib_data_cam_param.
ItemType=’calib_obj’: ItemIdx must be set to a valid calibration object index (number between 0
and NumCalibObjects-1). NumCalibObjects is specified during the model creation with
create_calib_data.
The following parameters can be queried with DataName and are returned in DataValue:
’num_marks’: Number of calibration marks of the calibration object.
’x’, ’y’, ’z’: Coordinates of the calibration marks relative to the calibration object coordinate system.
These parameters can be modified with set_calib_data_calib_object.
ItemType=’calib_obj_pose’: ItemIdx determines whether data is queried for all calibration object poses in general
or for a specific calibration object pose. With ItemIdx=’general’, the default value of a parameter for all
calibration object poses is returned. In contrast, if you pass a valid calibration object index instead, i.e., a
tuple containing a valid index pair [CalibObjIdx, CalibObjPoseIdx], only the parameter value of
the specified calibration object pose is returned.
By selecting the following parameters in DataName, you can query which calibration object pose parameters
are (or have been) optimized during the calibration performed by calibrate_cameras:
’calib_settings’: List of calibration object pose parameters marked for calibration.
’excluded_settings’: List of calibration object pose parameters excluded from calibration.
These parameters can be set with set_calib_data.
The following parameters can only be queried for a specific calibration object pose, i.e., you must pass a valid
index pair [CalibObjIdx, CalibObjPoseIdx] in ItemIdx:
’init_pose’: Initial calibration object pose. It is computed internally based on observation poses during the
calibration process (see Calibration). This pose is relative to the current reference camera.
’pose’: Optimized calibration object pose, relative to current reference camera.
’pose_labels’: A convenience list of labels for the entries returned by ’pose’.
These parameters cannot be explicitly modified and can only be queried after calibrate_cameras was
executed.
ItemType=’tool’: The following parameters can be queried with DataName and are returned in DataValue:
’tool_in_base_pose’: Pose of the robot tool in robot base coordinates with Index ItemIdx. These poses
were previously set using set_calib_data and served as input for the hand-eye calibration algo-
rithm.
’tool_in_base_pose_corrected’: Corrected pose of the robot tool in robot base coordinates of the input
’tool_in_base_pose’ with Index ItemIdx. This parameter is only available if ’optimization_method’ is
set to ’stochastic’ and after calibrate_hand_eye was executed.
’tool_translation_deviation’, ’tool_rotation_deviation’: Standard deviations of the input poses of the robot
tool in robot base coordinates. ItemIdx has to be set to ’general’. This parameter is only available if
’optimization_method’ is set to ’stochastic’ and after calibrate_hand_eye was executed.
After performing a successful hand-eye calibration using calibrate_hand_eye, the following poses can be
queried for a calibration data model of type:
’hand_eye_moving_cam’, ’hand_eye_scara_moving_cam’: For ItemType=’camera’ and
DataName=’tool_in_cam_pose’, the pose of the robot tool in the camera coordinate system is re-
turned in DataValue. For ItemType=’calib_obj’ and DataName=’obj_in_base_pose’, the pose of the
calibration object in the robot base coordinate system is returned in DataValue.
Note that when calibrating SCARA robots, it is not possible to determine the Z translation of
’obj_in_base_pose’. To eliminate this ambiguity, the Z translation of ’obj_in_base_pose’ is internally set to
0.0 and the ’tool_in_cam_pose’ is calculated accordingly. It is necessary to determine the true translation in
Z after the calibration (see calibrate_hand_eye).
The standard deviations and the covariance matrices of the 6 pose parameters of both poses can
be queried with ’tool_in_cam_pose_deviations’, ’tool_in_cam_pose_covariances’ (ItemType=’camera’),
’obj_in_base_pose_deviations’, and ’obj_in_base_pose_covariances’ (ItemType=’calib_obj’). Like
poses, they are specified in the units [m] and [°].
’hand_eye_stationary_cam’, ’hand_eye_scara_stationary_cam’: For ItemType=’camera’ and
DataName=’base_in_cam_pose’, the pose of the robot base in the camera coordinate system is re-
turned in DataValue. For ItemType=’calib_obj’ and DataName=’obj_in_tool_pose’, the pose of the
calibration object in the robot tool coordinate system is returned in DataValue.
Note that when calibrating SCARA robots, it is not possible to determine the Z translation of
’obj_in_tool_pose’. To eliminate this ambiguity, the Z translation of ’obj_in_tool_pose’ is internally set
to 0.0 and the ’base_in_cam_pose’ is calculated accordingly. It is necessary to determine the true translation
in Z after the calibration (see calibrate_hand_eye).
The standard deviations and the covariance matrices of the 6 pose parameters of both poses can be
queried with ’base_in_cam_pose_deviations’, ’base_in_cam_pose_covariances’ (ItemType=’camera’),
’obj_in_tool_pose_deviations’, and ’obj_in_tool_pose_covariances’ (ItemType=’calib_obj’). Like poses,
they are specified in the units [m] and [°].
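For illustration, a minimal sketch (not part of the original reference) that queries the results of a moving camera hand-eye calibration:
* Pose of the robot tool in the camera coordinate system.
get_calib_data (CalibDataID, 'camera', 0, 'tool_in_cam_pose', ToolInCamPose)
* Pose of the calibration object in the robot base coordinate system.
get_calib_data (CalibDataID, 'calib_obj', 0, 'obj_in_base_pose', ObjInBasePose)
* Pose error of the complete chain of transformations.
get_calib_data (CalibDataID, 'model', 'general', 'hand_eye_calib_error', HandEyeErrors)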
Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. ItemType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of calibration data item.
Default: ’camera’
List of values: ItemType ∈ {’model’, ’camera’, ’calib_obj’, ’calib_obj_pose’, ’tool’}
. ItemIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Index of the affected item (depending on the selected ItemType).
Default: 0
Suggested values: ItemIdx ∈ {0, 1, 2, ’general’}
. DataName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; string
The name of the inspected data.
Default: ’params’
List of values: DataName ∈ {’type’, ’reference_camera’, ’num_cameras’, ’num_calib_objs’,
’camera_setup_model’, ’camera_calib_error’, ’camera_calib_error_corrected_tool’, ’hand_eye_calib_error’,
’hand_eye_calib_error_corrected_tool’, ’optimization_method’, ’num_marks’, ’x’, ’y’, ’z’, ’params’, ’pose’,
’init_params’, ’init_pose’, ’params_deviations’, ’params_covariances’, ’params_labels’, ’pose_labels’,
’calib_settings’, ’excluded_settings’, ’common_motion_vector’, ’tool_in_cam_pose’, ’obj_in_base_pose’,
’base_in_cam_pose’, ’obj_in_tool_pose’, ’tool_in_base_pose’, ’tool_in_cam_pose_deviations’,
’obj_in_base_pose_deviations’, ’base_in_cam_pose_deviations’, ’obj_in_tool_pose_deviations’,
’tool_in_cam_pose_covariances’, ’obj_in_base_pose_covariances’, ’base_in_cam_pose_covariances’,
’obj_in_tool_pose_covariances’, ’tool_translation_deviation’, ’tool_rotation_deviation’,
’tool_in_base_pose_corrected’}
Execution Information
• Calibration plates with hexagonally arranged marks: Special mark hexagon (i.e., a mark and its six neighbors)
where either four or six marks contain a hole, see create_caltab.
• Calibration plates with rectangularly arranged marks: The border of the calibration plate with a triangle in
one corner.
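For illustration, a minimal sketch (not part of the original reference) that retrieves the mark contours extracted for the observation of calibration object 0 in pose 0 by camera 0:
get_calib_data_observ_contours (Contours, CalibDataID, 'marks', 0, 0, 0)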
Parameters
. Contours (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; object
Contour-based result(s).
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. ContourName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of contour objects to be returned.
Default: ’marks’
List of values: ContourName ∈ {’marks’, ’caltab’, ’last_caltab’, ’marks_with_hole’}
. CameraIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observing camera.
Default: 0
. CalibObjIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observed calibration plate.
Default: 0
. CalibObjPoseIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observed calibration object pose.
Default: 0
Execution Information
’num_cameras’: Number of cameras described in the model. The number of cameras is fixed with the creation of
the camera setup model and cannot be changed after that (see create_camera_setup_model).
’camera_calib_error’: The root mean square error (RMSE) of the back projection of the optimization of the
camera system. This error is identical with the error returned by calibrate_cameras.
’reference_camera’: Returns the index of the camera that has been defined as reference camera within the sys-
tem. If no reference camera has been specified using set_camera_setup_param, the index 0 is re-
turned. If the coordinate system has been moved by setting a pose with the parameter ’coord_transf_pose’
in set_camera_setup_param, the origin of the coordinate system is not located in any of the available
cameras. Therefore, the index -1 is returned.
’coord_transf_pose’: Returns the pose in which the coordinate system of the setup has been moved. Please
note that after setting a reference camera with set_camera_setup_param, the pose of this camera
is returned. Adjusting this coordinate system subsequently using the parameter ’coord_transf_pose’ in
set_camera_setup_param yields a pose that corresponds to the location and orientation of the desired
coordinate system relative to the current one.
Camera parameters:
By setting CameraIdx to a valid setup camera index (a value between 0 and NumCameras-1) and
GenParamName to one of the following values, camera-specific parameters are returned in GenParamValue:
’pose’: Camera pose relative to the setup’s coordinate system (see create_camera_setup_model for more
details).
Note that the camera needs to be set first by set_camera_setup_cam_param, before any of its parameters
can be inspected by get_camera_setup_param. If CameraIdx is an index of an undefined camera, the
operator returns an error.
For more information about the calibration process of your camera setup see the chapter Calibration.
Parameters
. CameraSetupModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . camera_setup_model ; handle
Handle to the camera setup model.
. CameraIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Index of the camera in the setup.
Default: 0
Suggested values: CameraIdx ∈ {0, 1, 2, ’general’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Names of the generic parameters to be queried.
List of values: GenParamName ∈ {’camera_calib_error’, ’type’, ’params’, ’params_deviations’,
’params_covariances’, ’pose’, ’reference_camera’, ’coord_transf_pose’, ’num_cameras’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; real / integer / string
Values of the generic parameters to be queried.
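Example
A minimal sketch (not part of the original reference):
* Number of cameras described in the setup model.
get_camera_setup_param (CameraSetupModelID, 'general', 'num_cameras', NumCameras)
* Internal parameters and pose of setup camera 0.
get_camera_setup_param (CameraSetupModelID, 0, 'params', CamParams0)
get_camera_setup_param (CameraSetupModelID, 0, 'pose', CamPose0)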
Execution Information
Module
Calibration
Query information about the relations between cameras, calibration objects, and calibration object poses.
A calibration data model (CalibDataID) contains a collection of observations, which are added to the model
by set_calib_data_observ_points. Each observation is associated to an observing camera, an observed
calibration object, and a calibration object pose. With the operator query_calib_data_observ_indices,
you can query observation indices associated to a camera or an calibration object, depending on the parameter
ItemType.
For ItemType=’camera’, you must pass a valid camera index in ItemIdx. Then, Index1 returns a list of
calibration object indices and Index2 returns a list of pose indices. Each pair [Index1[I],Index2[I]]
represents a calibration object pose that is ’observed’ by camera ItemIdx.
For ItemType=’calib_obj’, you must specify a valid calibration object index in ItemIdx. Then, Index1
returns a list of camera indices and Index2 returns a list of corresponding calibration object pose indices. Each
pair [Index1[I],Index2[I]] denotes that camera Index1[I] is observing the Index2[I]th pose of
calibration object ItemIdx.
This operator is particularly suitable for accessing observation data of a calibration data model whose configuration
is unknown at the moment of its usage (e.g., if it was just read from a file). As a special case, this operator can be
used to get the precise list of poses of one calibration object (see the example).
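A minimal sketch (not part of the original reference) of this special case, listing all poses of calibration object 0 and accessing the corresponding observation data:
query_calib_data_observ_indices (CalibDataID, 'calib_obj', 0, CamIndices, PoseIndices)
for I := 0 to |PoseIndices|-1 by 1
    get_calib_data_observ_points (CalibDataID, CamIndices[I], 0, PoseIndices[I], \
                                  Rows, Cols, MarkIndices, ObservPose)
endfor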
Parameters
Execution Information
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Module
Calibration
Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. CameraIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observing camera.
Default: 0
. CalibObjIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observed calibration object.
Default: 0
. CalibObjPoseIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Index of the observed calibration object pose.
Default: 0
Execution Information
• CalibDataID
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Successors
fwrite_serialized_item, send_serialized_item, deserialize_calib_data
Module
Calibration
serialize_camera_setup_model (
: : CameraSetupModelID : SerializedItemHandle )
The parameter ItemIdx lets you select whether the new value should be set for all items of a type or only for an
individual one. The parameters to set are passed in DataName, their values in DataValue.
To get detailed information about the calibration process of your camera setup see the chapter Calibration.
Model-related data
Camera-related data
ItemType=’camera’: ItemIdx determines whether data is set for all cameras in general or for a specific camera.
With ItemIdx=’general’, the new settings are applied to all cameras in the model. If you pass a valid
camera index instead, i.e., a number between 0 and NumCameras-1 (NumCameras is specified during
model creation with create_calib_data), only the specified camera is affected by the changes.
By selecting the following parameters in DataName, you can specify which camera parameters shall be
optimized during the calibration performed by calibrate_cameras:
’calib_settings’: The camera parameters listed in DataValue are marked for optimization for the affected
camera(s) (additionally to the camera parameters that were already marked for optimization). Note that
by default, all parameters are marked for the optimization. That is, ’calib_settings’ is mainly suited to
add previously excluded parameters again.
’excluded_settings’: The camera parameters listed in DataValue are excluded from the optimization for
the affected camera(s).
The following camera parameters can be passed in DataValue. See Calibration for affected camera types
and further details about the parameters.
Internal camera parameters
’focus’: Focal length of the lens.
’magnification’: Magnification of the lens.
’kappa’: Divisional distortion coefficient kappa.
’k1’,’k2’,’k3’: Polynomial radial distortion parameters.
’poly_tan_2’: An alias parameter for all polynomial tangential distortion parameters, i.e., p1 and p2.
’poly’: An alias parameter for all polynomial distortion parameters, i.e., k1, k2, k3, p1, and p2.
’image_plane_dist’: The distance of the tilted image plane from the perspective projection center.
’tilt’: Tilt and rotation of the tilt lens.
’cx’,’cy’: Coordinates of the camera’s principal point.
’principal_point’: An alias parameter for ’cx’ and ’cy’.
’sx’,’sy’: Sensor element dimensions.
’params’: All internal camera parameters.
External camera parameters
ItemType=’calib_obj_pose’: ItemIdx determines whether data is set for all calibration object poses in general or
for a specific calibration object pose. With ItemIdx=’general’ the new settings are applied to all calibration
object poses in the model. If you pass a valid calibration object pose index instead, i.e., a tuple containing a
valid index pair [CalibObjIdx, CalibObjPoseIdx], you specify a calibration object pose, which is
affected by the changes.
By selecting the following parameters in DataName, you can specify which calibration object pose pa-
rameters shall be optimized during the calibration performed by calibrate_cameras:
’calib_settings’: The calibration object pose settings listed in DataValue are marked for optimization for
the affected pose(s). Note that by default, all calibration pose parameters are marked for optimization.
That is, ’calib_settings’ is mainly suited to add previously excluded parameters again.
’excluded_settings’: The calibration object pose settings listed in DataValue are excluded from the opti-
mization for the affected pose(s).
The following calibration pose parameters can be passed in DataValue:
’alpha’,’beta’,’gamma’: Rotation part of the calibration object pose.
’transx’,’transy’,’transz’: Translation part of the calibration object pose.
’pose’: All calibration object pose parameters.
’all’: All calibration object pose optimization parameters, i.e., the same as ’pose’.
By default all parameters are marked for calibration.
The current settings for any model item can be queried with the operator get_calib_data.
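As an illustration only, the following minimal sketch (the handle is assumed to stem from create_calib_data; indices and settings are example values) shows how parameters can be excluded from and re-added to the optimization:
* Keep the principal point of all cameras fixed at its initial value.
set_calib_data (CalibDataID, 'camera', 'general', 'excluded_settings', \
                'principal_point')
* Keep the rotation part of pose 0 of calibration object 0 fixed.
set_calib_data (CalibDataID, 'calib_obj_pose', [0,0], 'excluded_settings', \
                ['alpha','beta','gamma'])
* Mark the principal point for optimization again, if required.
set_calib_data (CalibDataID, 'camera', 'general', 'calib_settings', \
                'principal_point')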
Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. ItemType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of calibration data item.
Default: ’model’
List of values: ItemType ∈ {’model’, ’camera’, ’calib_obj_pose’, ’tool’}
. ItemIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; integer / string
Index of the affected item (depending on the selected ItemType).
Default: ’general’
Suggested values: ItemIdx ∈ {0, 1, 2, ’general’}
Execution Information
as a file name: it specifies a calibration plate description file as created with create_caltab or
gen_caltab.
as a numerical tuple: it specifies the 3D coordinates of all points of the calibration object. All X, Y, and Z
coordinates, respectively, of all points must be packed sequentially in the tuple in form: [X, Y, Z], i.e.,
[X1, ..., Xn, Y1, ..., Yn, Z1, ..., Zn], where |X| = |Y| = |Z| and all coordinates
are in meters.
To query the calibration object parameters stored earlier in a calibration data model, use get_calib_data.
To get detailed information about the calibration process of your camera setup see the chapter Calibration.
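For illustration, a minimal sketch (handle, file name, and coordinates are example values) of both ways to describe the calibration object:
create_calib_data ('calibration_object', 1, 1, CalibDataID)
* Variant 1: reference a calibration plate description file.
set_calib_data_calib_object (CalibDataID, 0, 'calplate_40mm.cpd')
* Variant 2: pass the 3D point coordinates directly as
* [X1,...,Xn, Y1,...,Yn, Z1,...,Zn] (here: four corners of a
* 0.1 m x 0.1 m square in the plane z=0, in meters).
X := [0.0, 0.1, 0.1, 0.0]
Y := [0.0, 0.0, 0.1, 0.1]
Z := [0.0, 0.0, 0.0, 0.0]
set_calib_data_calib_object (CalibDataID, 0, [X, Y, Z])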
Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. CalibObjIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Calibration object index.
Default: 0
Suggested values: CalibObjIdx ∈ {0, 1, 2}
. CalibObjDescr (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer / string
3D point coordinates or a description file name.
List of values: CalibObjDescr ∈ {’calplate.cpd’, ’calplate_5mm.cpd’, ’calplate_10mm.cpd’,
’calplate_20mm.cpd’, ’calplate_40mm.cpd’, ’calplate_80mm.cpd’, ’calplate_160mm.cpd’,
’calplate_320mm.cpd’, ’calplate_640mm.cpd’, ’calplate_1200mm.cpd’, ’calplate_20mm_dark_on_light.cpd’,
’calplate_40mm_dark_on_light.cpd’, ’calplate_80mm_dark_on_light.cpd’, ’caltab.descr’,
’caltab_650um.descr’, ’caltab_2500um.descr’, ’caltab_6mm.descr’, ’caltab_10mm.descr’,
’caltab_30mm.descr’, ’caltab_100mm.descr’, ’caltab_200mm.descr’, ’caltab_800mm.descr’,
’caltab_small.descr’, ’caltab_big.descr’}
Execution Information
An overview of all available camera types and their respective parameters is given in CameraParam.
The camera type can be queried later by calling get_calib_data with the arguments ItemType=’camera’
and DataName=’type’. The initial camera parameters can be queried by calling get_calib_data with argu-
ments ItemType=’camera’ and DataName=’init_params’.
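For illustration, a minimal sketch (the initial values are rough example guesses for an ’area_scan_division’ camera) of setting and querying the initial camera parameters:
* [type, focus, kappa, sx, sy, cx, cy, width, height]
StartCamParam := ['area_scan_division', 0.016, 0, 4.4e-6, 4.4e-6, \
                  640, 512, 1280, 1024]
set_calib_data_cam_param (CalibDataID, 0, [], StartCamParam)
* Query the stored values again.
get_calib_data (CalibDataID, 'camera', 0, 'type', CameraType)
get_calib_data (CalibDataID, 'camera', 0, 'init_params', InitParams)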
Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. CameraIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer / string
Camera index.
Default: 0
Suggested values: CameraIdx ∈ {’all’, 0, 1, 2}
. CameraType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Type of the camera.
Default: []
List of values: CameraType ∈ {[]}
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Initial camera internal parameters.
Execution Information
Row, Column, Index: Extracted image coordinates and corresponding index of the calibration marks of the
calibration object. Row and Column are tuples containing the same number of elements. Index can either
contain a tuple (of the same length) or the value ’all’, indicating that the points [Row, Column] correspond
in a one-to-one relation to the calibration marks of the calibration object. If the number of row or column
coordinates does not match the number of calibration marks, a corresponding error message is returned.
Pose: A roughly estimated pose of the observed calibration object relative to observing camera.
If you are using the HALCON calibration plate, it is recommended to use find_calib_object instead of
set_calib_data_observ_points, since the contour information, which it stores in the calibration data
model, enables a more precise calibration procedure with calibrate_cameras.
The observation data can be accessed later by calling get_calib_data_observ_points using the same
values for the arguments CameraIdx, CalibObjIdx, and CalibObjPoseIdx.
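For illustration, a minimal sketch (Row, Column, and StartPose are assumed to come from a separate mark extraction; all indices are example values):
* Store externally extracted calibration marks for camera 0,
* calibration object 0, pose 0.
set_calib_data_observ_points (CalibDataID, 0, 0, 0, Row, Column, 'all', \
                              StartPose)
* With a HALCON calibration plate, find_calib_object stores the
* observations (including contour information) directly in the model.
find_calib_object (Image, CalibDataID, 0, 0, 1, [], [])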
Parameters
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
find_marks_and_pose, set_calib_data_cam_param, set_calib_data_calib_object
Possible Successors
set_calib_data, calibrate_cameras
Alternatives
find_calib_object
Module
Calibration
Define type, parameters, and relative pose of a camera in a camera setup model.
The operator set_camera_setup_cam_param defines the internal parameters and the pose of the camera
with CameraIdx in the camera setup model CameraSetupModelID. The parameter CameraIdx must be
between 0 and NumCameras-1 (see get_camera_setup_param with argument ’num_cameras’). If a cam-
era with CameraIdx was already defined, its parameters are overwritten by the current ones (the camera is
’substituted’).
The number of values in CameraParam depends on the camera type. See the description of
set_calib_data_cam_param for a list of values and Calibration for details on camera types and camera
parameters.
The parameter CameraType is only provided for backwards compatibility. The information about the camera
type is contained in the first element of CameraParam. Therefore, CameraType should be set either to its
default value [] (the recommended option) or to the same value as the first element of CameraParam. In any
other case an error is raised.
The parameter CameraPose specifies the pose of the camera relative to the setup’s coordinate system (see
set_camera_setup_param for further explanations on the setup’s coordinate system).
All of the parameters set by set_camera_setup_cam_param can be read back by
get_camera_setup_param. While the camera type can be changed only with a new
call to set_camera_setup_cam_param, all other camera parameters can be modified by
set_camera_setup_param. Furthermore, set_camera_setup_param can set additional data to
a camera: standard deviations or covariances of the internal camera parameters.
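For illustration, a minimal sketch (the parameter values and poses are example values) of defining two cameras in a setup model:
create_camera_setup_model (2, CameraSetupModelID)
CamParam := ['area_scan_division', 0.016, 0, 4.4e-6, 4.4e-6, \
             640, 512, 1280, 1024]
* Camera 0 coincides with the setup's coordinate system (identity pose).
set_camera_setup_cam_param (CameraSetupModelID, 0, [], CamParam, \
                            [0, 0, 0, 0, 0, 0, 0])
* Camera 1 is shifted 0.2 m along the x axis of the setup.
set_camera_setup_cam_param (CameraSetupModelID, 1, [], CamParam, \
                            [0.2, 0, 0, 0, 0, 0, 0])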
Parameters
. CameraSetupModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . camera_setup_model ; handle
Handle to the camera setup model.
. CameraIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer
Index of the camera in the setup.
Suggested values: CameraIdx ∈ {0, 1, 2}
. CameraType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Type of the camera.
Default: []
List of values: CameraType ∈ {[]}
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. CameraPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer
Pose of the camera relative to the setup’s coordinate system.
Number of elements: 7
Execution Information
Module
Calibration
’reference_camera’: When setting GenParamValue to a valid camera index, all camera poses are recomputed
relative to the coordinate system of this camera.
’coord_transf_pose’: When passing a tuple in HALCON pose format in GenParamValue, the current coordi-
nate system is moved into this pose. The pose in GenParamValue represents the location and orientation
of the desired coordinate system relative to the current one. All camera poses are recomputed relative to the
new coordinate system.
The recomputed camera poses can be inspected with the operator get_camera_setup_param.
Camera parameters:
By setting CameraIdx to a valid setup camera index (a value between 0 and NumCameras-1) and
GenParamName to one of the following values, camera specific parameters can be set with GenParamValue:
Note that the camera must already be defined in the model before any of its parameters can be changed by
set_camera_setup_param. If CameraIdx is the index of an undefined camera, the operator returns an error.
All parameters can be read back by get_camera_setup_param.
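For illustration, a minimal sketch (handle, index, and pose values are examples) of modifying a setup model:
* Recompute all camera poses relative to camera 1.
set_camera_setup_param (CameraSetupModelID, 'general', 'reference_camera', 1)
* Overwrite the pose of camera 0 (pose format: x, y, z, rx, ry, rz, type).
set_camera_setup_param (CameraSetupModelID, 0, 'pose', \
                        [0.1, 0, 0.5, 0, 0, 0, 0])
* Read the recomputed pose back.
get_camera_setup_param (CameraSetupModelID, 0, 'pose', CamPose0)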
Parameters
. CameraSetupModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . camera_setup_model ; handle
Handle to the camera setup model.
. CameraIdx (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Unique index of the camera in the setup.
Default: 0
Suggested values: CameraIdx ∈ {0, 1, 2, ’general’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Names of the generic parameters to be set.
List of values: GenParamName ∈ {’params’, ’params_deviations’, ’params_covariances’, ’pose’,
’reference_camera’, ’coord_transf_pose’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; real / integer / string
Values of the generic parameters to be set.
Execution Information
Possible Predecessors
create_camera_setup_model, read_camera_setup_model
Module
Calibration
Note that no calibration results are stored in the file. You can access them with the operator get_calib_data,
either as individual items or in the form of a camera setup model, and store them separately.
The calibration data model can be later read with read_calib_data.
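For illustration, a minimal sketch (the file name is an example):
write_calib_data (CalibDataID, 'my_calib_setup.ccd')
* Later, e.g., in another program:
read_calib_data ('my_calib_setup.ccd', CalibDataID2)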
Parameters
. CalibDataID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . calib_data ; handle
Handle of a calibration data model.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
The file name of the model to be saved.
File extension: .ccd
Execution Information
Parameters
. CameraSetupModelID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . camera_setup_model ; handle
Handle to the camera setup model.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
The file name of the model to be saved.
File extension: .csm
Execution Information
Module
Calibration
6.8 Projection
Convert internal camera parameters and a 3D pose into a 3×4 projection matrix.
cam_par_pose_to_hom_mat3d converts the internal camera parameters CameraParam and the 3D pose
Pose, which represent the external camera parameters, into the 3×4 projection matrix HomMat3D, which can
be used to project points from 3D to 2D. The conversion can only be performed if the distortion coefficients in
CameraParam are 0. If necessary, change_radial_distortion_cam_par must be used to achieve this.
The internal camera parameters and the pose are typically obtained with calibrate_cameras.
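For illustration, a minimal sketch (CamParam and Pose are assumed to be results of calibrate_cameras; the 3D point is an example):
* Remove the distortion first; otherwise the conversion is not possible.
change_radial_distortion_cam_par ('adaptive', CamParam, 0, CamParamRect)
cam_par_pose_to_hom_mat3d (CamParamRect, Pose, HomMat3D)
* Project a 3D point (world coordinates, in meters) into the image;
* Qx corresponds to the column and Qy to the row coordinate.
project_point_hom_mat3d (HomMat3D, 0.05, 0.02, 0.0, Qx, Qy)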
Parameters
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. Pose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
3D pose.
Number of elements: 7
. HomMat3D (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d ; real
3×4 projection matrix.
Result
cam_par_pose_to_hom_mat3d returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary,
an exception is raised.
Execution Information
Result
project_3d_point returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an exception
is raised.
Execution Information
Module
Calibration
To transform the homogeneous coordinates to Euclidean coordinates, they must be divided by Qw:

$$\begin{pmatrix} E_x \\ E_y \end{pmatrix} = \begin{pmatrix} Q_x / Q_w \\ Q_y / Q_w \end{pmatrix}$$
Possible Predecessors
cam_par_pose_to_hom_mat3d
Alternatives
project_point_hom_mat3d, project_3d_point
Module
Foundation
If a point on the line at infinity ($T_w = 0$) is created by the transformation, an error is returned. If this is undesired,
project_hom_point_hom_mat3d can be used.
Note that, consistent with the conventions used by the projection in calibrate_cameras, Qx corresponds to
the column coordinate of an image and Qy corresponds to the row coordinate.
Parameters
. HomMat3D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat3d ; real
3×4 projection matrix.
. Px (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Input point (x coordinate).
. Py (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Input point (y coordinate).
. Pz (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Input point (z coordinate).
. Qx (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Output point (x coordinate).
. Qy (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real
Output point (y coordinate).
Execution Information
6.9 Rectification
’fixed’: Only the distortion coefficients are modified, the other internal camera parameters remain unchanged. In
general, this leads to a change of the visible part of the scene.
’fullsize’: For area scan cameras, the scale factors Sx and Sy and the image center point (Cx, Cy)^T are modified in
order to preserve the visible part of the scene. For line scan cameras with telecentric lenses, the scale factor
Sx, the image center point (Cx, Cy)^T, and the Vy component of the motion vector are changed to achieve
this effect. Thus, all points visible in the original image are also visible in the modified (rectified) image. In
general, this leads to undefined pixels in the modified image.
’adaptive’: A trade-off between the other modes: The visible part of the scene is slightly reduced to prevent
undefined pixels in the modified image. The same parameters as for ’fullsize’ are modified.
’preserve_resolution’: As in the mode ’fullsize’, all points visible in the original image are also visible in the
modified (rectified) image. For area scan cameras, the scale factors Sx and Sy and the image center point
(Cx, Cy)^T are modified. For line scan cameras with telecentric lenses, the scale factor Sx, the image center
point (Cx, Cy)^T, and potentially the Vy component of the motion vector are changed to achieve this
effect. In general, this leads to undefined pixels in the modified image. In contrast to the mode ’fullsize’,
additionally the size of the modified image is increased such that the image resolution does not decrease in
any part of the image.
In all modes, the distortion coefficients in CamParamOut are set to DistortionCoeffs. For telecentric line
scan cameras, the motion vector also influences the perceived distortion. For example, a nonzero Vx component
leads to skewed pixels. Furthermore, if Vy ≠ Sx/Magnification, the pixels appear to be non-square. Therefore, for
telecentric line scan cameras, up to three more components can be passed in addition to κ or (K1, K2, K3, P1, P2),
respectively, in DistortionCoeffs. These specify the new Vx, Vy, and Vz components of the motion vector.
The transformation of a pixel in the modified image into the image plane using CamParamOut results in the same
point as the transformation of a pixel in the original image via CamParamIn.
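For illustration, a minimal sketch (CamParam and Contours are assumed to exist from a prior calibration and contour extraction):
* Parameters of a hypothetical distortion-free camera; 'fullsize' keeps
* all points of the original image visible.
change_radial_distortion_cam_par ('fullsize', CamParam, 0, CamParamRect)
* The new parameters can then be used to rectify XLD contours.
change_radial_distortion_contours_xld (Contours, ContoursRect, CamParam, \
                                       CamParamRect)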
Parameters
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Mode
Default: ’adaptive’
Suggested values: Mode ∈ {’fullsize’, ’adaptive’, ’fixed’, ’preserve_resolution’}
. CamParamIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters (original).
. DistortionCoeffs (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer
Desired radial distortions.
Number of elements: DistortionCoeffs == 1 || DistortionCoeffs == 5
Default: 0.0
. CamParamOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters (modified).
Result
change_radial_distortion_cam_par returns 2 (H_MSG_TRUE) if all parameter values are correct. If
necessary, an exception is raised.
Execution Information
change_radial_distortion_contours_xld (
Contours : ContoursRectified : CamParamIn, CamParamOut : )
change_radial_distortion_image ( Image,
Region : ImageRectified : CamParamIn, CamParamOut : )
See also
change_radial_distortion_cam_par, camera_calibration, read_cam_par,
change_radial_distortion_contours_xld, change_radial_distortion_image
Module
Calibration
contour_to_world_plane_xld (
Contours : ContoursTrans : CameraParam, WorldPose, Scale : )
Transform an XLD contour into the plane z=0 of a world coordinate system.
The operator contour_to_world_plane_xld transforms contour points given in Contours into the plane
z=0 in a world coordinate system and returns the 3D contour points in ContoursTrans. The world coordinate
system is chosen by passing its 3D pose relative to the camera coordinate system in WorldPose. Hence, latter
one is expected in the form ccs Pwcs , where ccs denotes the camera coordinate system and wcs the world coordinate
system (see Transformations / Poses and “Solution Guide III-C - 3D Vision”). In CameraParam
you must pass the internal camera parameters (see Calibration for the sequence of the parameters and the underly-
ing camera model).
In many cases CameraParam and WorldPose are the result of calibrating the camera with the operator
calibrate_cameras. See below for an example.
With the parameter Scale you can scale the resulting 3D coordinates. The parameter Scale must be specified
as the ratio desired unit/original unit. The original unit is determined by the coordinates of the calibration object.
If the original unit is meters (which is the case if you use the standard calibration plate), you can set the desired
unit directly by selecting ’m’, ’cm’, ’mm’ or ’um’ for the parameter Scale.
Internally, the operator first computes the line of sight between the projection center and the image point in the
camera coordinate system, taking into account the radial distortions. The line of sight is then transformed into the
world coordinate system specified in WorldPose. By intersecting the plane z=0 with the line of sight the 3D
coordinates of the transformed contour ContoursTrans are obtained.
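For illustration, a minimal sketch (CamParam and Pose are assumed to be results of calibrate_cameras; the edge filter settings are example values):
* Extract subpixel-precise edges and convert them to millimeters in the
* plane z=0 of the world coordinate system.
edges_sub_pix (Image, Edges, 'canny', 1.5, 20, 40)
contour_to_world_plane_xld (Edges, EdgesWorld, CamParam, Pose, 'mm')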
Parameters
. Contours (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; object
Input XLD contours to be transformed in image coordinates.
. ContoursTrans (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont(-array) ; object
Transformed XLD contours in world coordinates.
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real / integer / string
Internal camera parameters.
. WorldPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
3D pose of the world coordinate system in camera coordinates.
Number of elements: 7
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; string / integer / real
Scale or dimension
Default: ’m’
Suggested values: Scale ∈ {’m’, ’cm’, ’mm’, ’microns’, ’um’, 1.0, 0.01, 0.001, 1.0e-6, 0.0254, 0.3048,
0.9144}
Restriction: Scale > 0
Example
Result
contour_to_world_plane_xld returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary,
an exception is raised.
Execution Information
Generate a projection map that describes the mapping between the image plane and the plane z=0 of a world
coordinate system.
gen_image_to_world_plane_map generates a projection map Map, which describes the mapping between
the image plane and the plane z=0 (plane of measurements) in a world coordinate system. This map can be used
to rectify an image with the operator map_image. The rectified image shows neither radial nor perspective
distortions; it corresponds to an image acquired by a distortion-free camera that looks perpendicularly onto the
plane of measurements. The world coordinate system (wcs) is chosen by passing its 3D pose relative to the camera
coordinate system (ccs) in WorldPose. Thus the pose is expected in the form $^{ccs}P_{wcs}$ (see Transformations
/ Poses and “Solution Guide III-C - 3D Vision”). In CameraParam you must pass the internal
camera parameters (see Calibration for the sequence of the parameters and the underlying camera model).
In many cases CameraParam and WorldPose are the result of calibrating the camera with the operator
calibrate_cameras. See below for an example.
The size of the images to be mapped can be specified by the parameters WidthIn and HeightIn. The pixel
position of the upper left corner of the output image is determined by the origin of the world coordinate system.
The size of the output image can be chosen by the parameters WidthMapped, HeightMapped, and Scale.
WidthMapped and HeightMapped must be given in pixels.
The parameter Scale can be used to specify the size of a pixel in the transformed image. There are two ways to
use this parameter:
The mapping function is stored in the output image Map. Map has the same size as the resulting images after the
mapping. MapType is used to specify the type of the output Map. If ’nearest_neighbor’ is chosen, Map consists
of one image containing one channel, in which for each pixel of the resulting image the linearized coordinate
of the pixel of the input image is stored that is the nearest neighbor to the transformed coordinates. If ’bilinear’
interpolation is chosen, Map consists of one image containing five channels. In the first channel, for each pixel of
the resulting image, the linearized coordinate of the input image pixel that lies to the upper left of the transformed
coordinates is stored. The four other channels contain the weights of the four neighboring pixels
of the transformed coordinates which are used for the bilinear interpolation, in the following order:
2 3
4 5
The second channel, for example, contains the weights of the pixels that lie to the upper left relative to the trans-
formed coordinates. If ’coord_map_sub_pix’ is chosen, Map consists of one vector field image of the semantic
type ’vector_field_absolute’, in which for each pixel of the resulting image the subpixel precise coordinates in the
input image are stored.
If several images have to be mapped using the same camera parameters, gen_image_to_world_plane_map
in combination with map_image is much more efficient than the operator image_to_world_plane because
the mapping function needs to be computed only once.
If you want to re-use the created map in another program, you can save it as a multi-channel image with the
operator write_image, using the format ’tiff’.
Parameters
. Map (output_object) . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; object : int4 / int8 / uint2 / vector_field
Image containing the mapping data.
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. WorldPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
3D pose of the world coordinate system in camera coordinates.
Number of elements: 7
. WidthIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the images to be transformed.
Restriction: WidthIn >= 1
. HeightIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the images to be transformed.
Restriction: HeightIn >= 1
. WidthMapped (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .extent.x ; integer
Width of the resulting mapped images in pixels.
Restriction: WidthMapped >= 1
. HeightMapped (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.y ; integer
Height of the resulting mapped images in pixels.
Restriction: HeightMapped >= 1
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; string / integer / real
Scale or unit.
Default: ’m’
Suggested values: Scale ∈ {’m’, ’cm’, ’mm’, ’microns’, ’um’, 1.0, 0.01, 0.001, 1.0e-6, 0.0254, 0.3048,
0.9144}
Restriction: Scale > 0
. MapType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the mapping.
Default: ’bilinear’
List of values: MapType ∈ {’nearest_neighbor’, ’bilinear’, ’coord_map_sub_pix’}
Example
* Calibrate camera.
calibrate_cameras (CalibDataID, Error)
* Obtain camera parameters.
get_calib_data (CalibDataID, 'camera', 0, 'params', CamParam)
* Example values, if no calibration data is available:
CamParam := ['area_scan_division', 0.0087, -1859, 8.65e-006, 8.6e-006, \
362.5, 291.6, 768, 576]
* Get reference pose (pose 4 of calibration object 0).
get_calib_data (CalibDataID, 'calib_obj_pose',\
[0,4], 'pose', Pose)
* Example values, if no calibration data is available:
Pose := [-0.11, -0.21, 2.51, 352.73, 346.73, 336.48, 0]
* Shift the origin of the world coordinate system.
set_origin_pose (Pose, -1.125, -1.0, 0, PoseNewOrigin)
* Transform the image into the world plane.
read_image (Image, 'calib/calib-3d-coord-04')
gen_image_to_world_plane_map (MapSingle, CamParam, PoseNewOrigin,\
CamParam[7], CamParam[8], 900, 800, 0.0025, 'bilinear')
map_image (Image, MapSingle, ImageMapped)
Result
gen_image_to_world_plane_map returns 2 (H_MSG_TRUE) if all parameter values are correct. If neces-
sary, an exception is raised.
Execution Information
Generate a projection map that describes the mapping of images corresponding to a changing radial distortion.
gen_radial_distortion_map computes the mapping of images corresponding to a changing radial distortion
in accordance with the internal camera parameters CamParamIn and CamParamOut, which can be obtained,
e.g., using the operator calibrate_cameras. CamParamIn and CamParamOut contain the old and the
new camera parameters including the old and the new radial distortion, respectively (also see Calibration for the
sequence of the parameters and the underlying camera model). Each pixel of the potential output image is trans-
formed into the image plane using CamParamOut and subsequently projected into a subpixel position of the
potential input image using CamParamIn. Note that gen_radial_distortion_map can only be used with
area scan cameras.
The mapping function is stored in the output image Map. The size of Map is given by the camera parameters
CamParamOut and therefore defines the size of the resulting mapped images using map_image. The size of
the images to be mapped with map_image is determined by the camera parameters CamParamIn. MapType is
used to specify the type of the output Map. If ’nearest_neighbor’ is chosen, Map consists of one image containing
one channel, in which for each pixel of the resulting image the linearized coordinate of the pixel of the input
image is stored that is the nearest neighbor to the transformed coordinates. If ’bilinear’ interpolation is chosen,
Map consists of one image containing five channels. In the first channel, for each pixel of the resulting image,
the linearized coordinate of the input image pixel that lies to the upper left of the transformed coordinates is
stored. The four other channels contain the weights of the four neighboring pixels of the
transformed coordinates which are used for the bilinear interpolation, in the following order:
2 3
4 5
The second channel, for example, contains the weights of the pixels that lie to the upper left relative to the trans-
formed coordinates. If ’coord_map_sub_pix’ is chosen, Map consists of one vector field image of the semantic
type ’vector_field_absolute’, in which for each pixel of the resulting image the subpixel precise coordinates in the
input image are stored.
If CamParamOut was computed via change_radial_distortion_cam_par, the mapping describes the
effect of a lens with a modified radial distortion κ. If κ = 0, the mapping corresponds to a rectification. A
subsequent pose estimation (determination of the external camera parameters) is not affected by this operation.
If several images have to be mapped using the same camera parameters, gen_radial_distortion_map
in combination with map_image is much more efficient than the operator
change_radial_distortion_image because the transformation must be computed only once.
If you want to re-use the created map in another program, you can save it as a multi-channel image with the
operator write_image, using the format ’tiff’.
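For illustration, a minimal sketch (CamParam is assumed to stem from a prior calibration; the file names and NumImages are placeholders):
* Build the rectification map once (kappa = 0), then map many images.
change_radial_distortion_cam_par ('adaptive', CamParam, 0, CamParamRect)
gen_radial_distortion_map (Map, CamParam, CamParamRect, 'bilinear')
for I := 1 to NumImages by 1
    read_image (Image, 'image_' + I)
    map_image (Image, Map, ImageRect)
endfor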
Parameters
. Map (output_object) . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image ; object : int4 / int8 / uint2 / vector_field
Image containing the mapping data.
. CamParamIn (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Old camera parameters.
. CamParamOut (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
New camera parameters.
. MapType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the mapping.
Default: ’bilinear’
List of values: MapType ∈ {’nearest_neighbor’, ’bilinear’, ’coord_map_sub_pix’}
Result
gen_radial_distortion_map returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary,
an exception is raised.
Execution Information
Possible Predecessors
change_radial_distortion_cam_par, camera_calibration, hand_eye_calibration
Possible Successors
map_image
Alternatives
change_radial_distortion_image
See also
change_radial_distortion_contours_xld
Module
Calibration
Transform image points into the plane z=0 of a world coordinate system.
The operator image_points_to_world_plane transforms image points which are given in Rows and Cols
into the plane z=0 in a world coordinate system and returns their 3D coordinates in X and Y. The world coordinate
system is chosen by passing its pose relative to the camera coordinate system in WorldPose. Hence, the latter is
expected in the form $^{ccs}P_{wcs}$, where ccs denotes the camera coordinate system and wcs the world coordinate
system (see Transformations / Poses and “Solution Guide III-C - 3D Vision”). In CameraParam you
must pass the internal camera parameters (see Calibration for the sequence of the parameters and the underlying
camera model).
In many cases CameraParam and WorldPose are the result of calibrating the camera with the operator
calibrate_cameras. See below for an example.
With the parameter Scale you can scale the resulting 3D coordinates. The parameter Scale must be specified
as the ratio desired unit/original unit. The original unit is determined by the coordinates of the calibration object.
If the original unit is meters (which is the case if you use the standard calibration plate), you can set the desired
unit directly by selecting ’m’, ’cm’, ’mm’ or ’um’ for the parameter Scale.
Internally, the operator first computes the line of sight between the projection center and the image contour points
in the camera coordinate system, taking into account the radial distortions. The line of sight is then transformed
into the world coordinate system specified in WorldPose. By intersecting the plane z=0 with the line of sight the
3D coordinates X and Y are obtained.
It is recommended to use only image points Rows and Cols that lie within the calibrated image size. The
mathematical model only works well for image points that lie within the calibration range.
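For illustration, a minimal sketch (CamParam, Pose, Row, and Column are assumed to exist, e.g., from a calibration and a preceding measurement):
image_points_to_world_plane (CamParam, Pose, Row, Column, 'mm', X, Y)
* Euclidean distance of the first two points in the plane z=0 (in mm).
distance_pp (Y[0], X[0], Y[1], X[1], Distance)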
Parameters
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. WorldPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
3D pose of the world coordinate system in camera coordinates.
Number of elements: 7
. Rows (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y-array ; real / integer
Row coordinates of the points to be transformed.
Default: 100.0
. Cols (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x-array ; real / integer
Column coordinates of the points to be transformed.
Default: 100.0
. Scale (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; string / integer / real
Scale or dimension
Default: ’m’
Suggested values: Scale ∈ {’m’, ’cm’, ’mm’, ’microns’, ’um’, 1.0, 0.01, 0.001, 1.0e-6, 0.0254, 0.3048,
0.9144}
Restriction: Scale > 0
. X (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.x-array ; real
X coordinates of the points in the world coordinate system.
. Y (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . coordinates.y-array ; real
Y coordinates of the points in the world coordinate system.
Example
Result
image_points_to_world_plane returns 2 (H_MSG_TRUE) if all parameter values are correct. If neces-
sary, an exception is raised.
Execution Information
Rectify an image by transforming it into the plane z=0 of a world coordinate system.
image_to_world_plane rectifies an image Image by transforming it into the plane z=0 (plane of mea-
surements) in a world coordinate system. The resulting rectified image ImageWorld shows neither radial nor
perspective distortions; it corresponds to an image acquired by a distortion-free camera that looks perpendicularly
onto the plane of measurements. The world coordinate system is chosen by passing its 3D pose relative to the
camera coordinate system in WorldPose. Hence, the latter is expected in the form $^{ccs}P_{wcs}$, where ccs denotes
the camera coordinate system and wcs the world coordinate system (see Transformations / Poses and “Solution
Guide III-C - 3D Vision”). In CameraParam you must pass the internal camera parameters (see Cali-
bration for the sequence of the parameters and the underlying camera model).
In many cases CameraParam and WorldPose are the result of calibrating the camera with the operator
calibrate_cameras. See below for an example.
The pixel position of the upper left corner of the output image ImageWorld is determined by the origin of the
world coordinate system. The size of the output image ImageWorld can be chosen by the parameters Width,
Height, and Scale. Width and Height must be given in pixels.
The parameter Scale can be used to specify the size of a pixel in the transformed image. There are two ways to
use this parameter:
The parameter Interpolation specifies whether bilinear interpolation (’bilinear’) should be applied between
the pixels in the input image or whether the gray value of the nearest neighboring pixel (’nearest_neighbor’) should
be used.
If several images have to be rectified using the same parameters, gen_image_to_world_plane_map in
combination with map_image is much more efficient than the operator image_to_world_plane because
the mapping function needs to be computed only once.
Attention
image_to_world_plane can be executed on OpenCL devices if the input image does not exceed the maxi-
mum size of image objects of the selected device. There can be slight differences in the output compared to the
execution on the CPU.
Parameters
. Image (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . (multichannel-)image(-array) ; object : byte / uint2 / real
Input image.
. ImageWorld (output_object) . . . . . . . . . . . . . . . . . .(multichannel-)image(-array) ; object : byte / uint2 / real
Transformed image.
. CameraParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . campar ; real / integer / string
Internal camera parameters.
. WorldPose (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pose ; real / integer
3D pose of the world coordinate system in camera coordinates.
Number of elements: 7
* Calibrate camera.
calibrate_cameras (CalibDataID, Error)
* Obtain camera parameters.
get_calib_data (CalibDataID, 'camera', 0, 'params', CamParam)
* Example values, if no calibration data is available:
CamParam := ['area_scan_division', 0.0087, -1859, 8.65e-006, 8.6e-006, \
362.5, 291.6, 768, 576]
* Get reference pose (pose 4 of calibration object 0).
get_calib_data (CalibDataID, 'calib_obj_pose',\
[0,4], 'pose', Pose)
* Example values, if no calibration data is available:
Pose := [-0.11, -0.21, 2.51, 352.73, 346.73, 336.48, 0]
* Shift the origin of the world coordinate system.
set_origin_pose (Pose, -1.125, -1.0, 0, PoseNewOrigin)
* Transform the image into the world plane.
read_image (Image, 'calib/calib-3d-coord-04')
image_to_world_plane (Image, ImageWorld, CamParam, PoseNewOrigin,\
900, 800, 0.0025, 'bilinear')
Result
image_to_world_plane returns 2 (H_MSG_TRUE) if all parameter values are correct. If necessary, an ex-
ception is raised.
Execution Information
6.10 Self-Calibration
radial_distortion_self_calibration (
Contours : SelectedContours : Width, Height, InlierThreshold,
RandSeed, DistortionModel, DistortionCenter,
PrincipalPointVar : CameraParam )
$$\frac{1}{m}\sum_{j=1}^{m}\left|d_j\right| \;>\; \mathrm{InlierThreshold}\cdot\frac{m}{100} \;=\; T$$
The value InlierThreshold describes the mean deviation of a contour from its associated line in pixels for
a contour that contains 100 points. The actual threshold T is derived from InlierThreshold by scaling it
with the reference length (100) and the number of contour points m. Therefore, similar contours are classified
alike. Typical values of InlierThreshold range from 0.05 to 0.5. The higher the value, the more deviation
is tolerated. By choosing the value 0, all the contours of Contours are used for the calibration process. The
RANSAC contour selection will then be suppressed to enable a manual contour selection. This can be helpful if
the outlier percentage is higher than 50 percent.
With the parameter RandSeed, you can control the randomized behavior of the RANSAC algorithm and force
it to return reproducible results. The parameter is passed as initial value to the internally used random number
generator. If it is set to a positive value, the operator returns identical results for each call with identical parameter
values. The value set for the HALCON system variable ’seed_rand’ (see set_system) does not affect the results
of radial_distortion_self_calibration.
radial_distortion_self_calibration returns the contours that were chosen for the calibration pro-
cess in SelectedContours.
’variable’ In the default mode ’variable’, the distortion center c is estimated with all the other calibration pa-
rameters at the same time. Here, many contours should lie equally distributed near the image borders or the
distortion should be high. Otherwise, the search for the distortion center could be ill-posed, which results in
instability.
’adaptive’ With the method ’adaptive’, the distortion center c is at first fixed in the image center. Then, the outliers
are eliminated by using the InlierThreshold. Finally, the calibration process is rerun by estimating
(κ, cx, cy) or (K1, K2, K3, P1, P2, cx, cy), respectively, which will be accepted if c = (cx, cy) results from
a stable calibration and lies near the image center. Otherwise, c will be assumed to lie in the image center.
This method should be used if the distortion center can be assumed to lie near the image center and if very
few contours are available or the position of other contours is bad (e.g., the contours have the same direction
or lie in the same image region).
’fixed’ By choosing the method ’fixed’, the distortion center will be assumed fixed in the image center and only
κ or (K1, K2, K3, P1, P2), respectively, will be estimated. This method should be used in case of very weak
distortions or few contours in bad position.
In order to control the deviation of c from the image center, the parameter PrincipalPointVar can be
used in the methods ’adaptive’ and ’variable’. If the deviation from the image center should be controlled,
PrincipalPointVar must lie between 1 and 100. The higher the value, the more the distortion center can
deviate from the image center. By choosing the value 0, the principal point is not controlled, i.e., the principal
point is determined solely based on the contours. The parameter PrincipalPointVar should be used in cases
of weak distortions or similarly oriented contours. Otherwise, a stable solution cannot be guaranteed.
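For illustration, a minimal sketch (the file name and the edge/segmentation parameters are example values) of a typical calibration and rectification sequence:
read_image (Image, 'some_image')
get_image_size (Image, Width, Height)
edges_sub_pix (Image, Edges, 'canny', 1.5, 20, 40)
segment_contours_xld (Edges, Segments, 'lines', 5, 4, 2)
radial_distortion_self_calibration (Segments, SelectedContours, Width, \
                                    Height, 0.1, 42, 'division', \
                                    'variable', 0, CamParam)
* Remove the estimated distortion from the image.
change_radial_distortion_cam_par ('adaptive', CamParam, 0, CamParamRect)
get_domain (Image, Domain)
change_radial_distortion_image (Image, Domain, ImageRect, CamParam, \
                                CamParamRect)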
Runtime
The runtime of radial_distortion_self_calibration is shortest for DistortionCenter = ’variable’
and PrincipalPointVar = 0. The runtime for DistortionCenter = ’variable’ and
PrincipalPointVar > 0 increases significantly for smaller values of PrincipalPointVar. The runtimes
for DistortionCenter = ’adaptive’ and DistortionCenter = ’fixed’ are also significantly higher
than for DistortionCenter = ’variable’ and PrincipalPointVar = 0.
Attention
Since the polynomial model (DistortionModel = ’polynomial’) uses more parameters than the division model
(DistortionModel = ’division’), the calibration using the polynomial model can be slightly less stable than
the calibration using the division model, which becomes noticeable in the accuracy of the decentering distortion
parameters P1, P2. To improve the stability, contours of multiple images can be used. Additional stability
can be achieved by setting DistortionCenter = ’fixed’, DistortionCenter = ’adaptive’, or
PrincipalPointVar > 0, as was already mentioned above.
Parameters
. Contours (input_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; object
Contours that are available for the calibration.
. SelectedContours (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xld_cont-array ; object
Contours that were used for the calibration
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the images from which the contours were extracted.
Default: 640
Suggested values: Width ∈ {640, 768}
Restriction: Width > 0
Result
If the parameters are valid, the operator radial_distortion_self_calibration returns the value 2
(H_MSG_TRUE). If necessary an exception is raised.
Execution Information
Possible Predecessors
edges_sub_pix, segment_contours_xld
Possible Successors
change_radial_distortion_cam_par, change_radial_distortion_image
See also
camera_calibration
References
T. Thormählen, H. Broszio: “Automatic line-based estimation of radial lens distortion”; in: Integrated Computer-
Aided Engineering; vol. 12; pp. 177-190; 2005.
Module
Calibration
limit that lies significantly below the maximum gray value) the overexposed areas should be masked out by hand
with reduce_domain in the overexposed image.
radiometric_self_calibration returns the inverse gray value response function of the camera in
InverseResponse. The inverse response function can be used to create an image with a linear response
by using InverseResponse as the LUT in lut_trans. The parameter FunctionType determines which
function model is used to model the response function. For FunctionType = ’discrete’, the response function
is described by a discrete function with the relevant number of gray values (256 for byte images). For
FunctionType = ’polynomial’, the response is described by a polynomial of degree PolynomialDegree.
The computation of the response function is slower for FunctionType = ’discrete’. However, since a
polynomial tends to oscillate in the areas in which no gray value information can be derived, even if smoothness
constraints are imposed as described below, the discrete model should usually be preferred over the polynomial
model.
The inverse response function is returned as a tuple of integer values for FunctionType = ’discrete’ and
FunctionType = ’polynomial’. In some applications, it might be desirable to return the inverse response
function as floating point values to avoid the numerical error that is introduced by rounding. For example, if the
inverse response function must be inverted to obtain the response function of the camera, there is some loss of
information if the values are returned as integers. For these applications, FunctionType can be set to ’discrete_real’ or
’polynomial_real’, in which case the inverse response function will be returned as a tuple of floating point numbers.
The parameter Smoothness defines (in addition to the constraints on the response function that can be de-
rived from the images) constraints on the smoothness of the response function. If, as described above, the gray
value range can be covered completely and without gaps, the default value of 1 should not be changed. Other-
wise, values > 1 can be used to obtain a stronger smoothing of the response function, while values < 1 lead
to a weaker smoothing. The smoothing is particularly important in areas for which no gray value information
can be derived from the images, i.e., in gaps in the histograms and for gray values smaller than the minimum
gray value of all images or larger than the maximum gray value of all images. In these areas, the smoothness
constraints lead to an interpolation or extrapolation of the response function. Because of the nature of the inter-
nally derived constraints, FunctionType = ’discrete’ favors an exponential function in the undefined areas,
whereas FunctionType = ’polynomial’ favors a straight line. Please note that interpolation and extrapolation
are always less reliable than covering the gray value range completely and without gaps. Therefore, in any
case it should be attempted first to acquire the images optimally, before the smoothness constraints are used to
fill in the remaining gaps. In all cases, the response function should be checked for plausibility after the call to
radiometric_self_calibration. In particular, it should be checked whether InverseResponse is
monotonic. If this is not the case, a more suitable scene should be used to avoid interpolation, or Smoothness
should be set to a larger value. For FunctionType = ’polynomial’, it may also be necessary to change
PolynomialDegree. If, despite these changes, an implausible response is returned, the saturation behavior
of the camera should be checked, e.g., based on the 2D gray value histogram, and the saturated areas should be
masked out by hand, as described above.
When the inverse gray value response function of the camera is determined, the absolute energy falling on the
camera cannot be determined. This means that InverseResponse can only be determined up to a scale factor.
Therefore, an additional constraint is used to fix the unknown scale factor: the maximum gray value that can occur
should occur for the maximum input gray value, e.g., InverseResponse[255] = 255 for byte images. This
constraint usually leads to the most intuitive results. If, however, a multichannel image (typically an RGB image)
should be radiometrically calibrated (for this, each channel must be calibrated separately), the above constraint
may lead to a different scaling factor being determined for each channel, with the result that gray tones no
longer appear gray after the correction. In this case, a manual white balancing step must be carried
out by identifying a homogeneous gray area in the original image, and by deriving appropriate scaling factors from
the corrected gray values for two of the three response curves (or, in general, for n − 1 of the n channels). Here,
the response curve that remains invariant should be chosen such that all scaling factors are < 1. With the scaling
factors thus determined, new response functions should be calculated by multiplying each value of a response
function with the scaling factor corresponding to that response function.
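For illustration, a minimal sketch (InverseResponse is assumed to be the result of a preceding call to radiometric_self_calibration on a suitable exposure series):
* Applying the inverse response function as a look-up table yields an
* image with a linear gray value response.
lut_trans (Image, ImageLinear, InverseResponse)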
Parameters
Result
If the parameters are valid, the operator radiometric_self_calibration returns the value 2
(H_MSG_TRUE). If necessary an exception is raised.
Execution Information
x = PX .
Here, x is a homogeneous 2D vector, X a homogeneous 3D vector, and P a homogeneous 3×4 projection matrix.
The projection matrix P can be decomposed as follows:
P = K(R|t) .
Here, R is a 3×3 rotation matrix and t is an inhomogeneous 3D vector. These two entities describe
the position (pose) of the camera in 3D space. This convention is analogous to the convention used in
camera_calibration, i.e., for R = I and t = 0 the x axis points to the right, the y axis downwards, and
the z axis points forward. K is the calibration matrix of the camera (the camera matrix) which can be described as
follows:
$$K = \begin{pmatrix} af & sf & u \\ 0 & f & v \\ 0 & 0 & 1 \end{pmatrix} .$$
Here, f is the focal length of the camera in pixels, a the aspect ratio of the pixels, s is a factor that models the
skew of the image axes, and (u, v) is the principal point of the camera in pixels. In this convention, the x axis
corresponds to the column axis and the y axis to the row axis.
Since the camera is stationary, it can be assumed that t = 0. With this convention, it is easy to see that the
fourth coordinate of the homogeneous 3D vector X has no influence on the position of the projected 3D point.
Consequently, the fourth coordinate can be set to 0, and it can be seen that X can be regarded as a point at infinity,
and hence represents a direction in 3D. With this convention, the fourth coordinate of X can be omitted, and hence
X can be regarded as inhomogeneous 3D vector which can only be determined up to scale since it represents a
direction. With this, the above projection equation can be written as follows:
x = KRX .
If two images of the same point are taken with a stationary camera, the following equations hold:
$$x_1 = K_1 R_1 X$$
$$x_2 = K_2 R_2 X$$
and consequently
$$x_2 = K_2 R_2 R_1^{-1} K_1^{-1} x_1 = K_2 R_{12} K_1^{-1} x_1 = H_{12} x_1 .$$
If the camera parameters do not change when taking the two images, K1 = K2 holds. Because of the above, the
two images of the same 3D point are related by a projective 2D transformation. This transformation can be deter-
mined with proj_match_points_ransac. It needs to be taken into account that the order of the coordinates
of the projective 2D transformations in HALCON is the opposite of the above convention. Furthermore, it needs
to be taken into account that proj_match_points_ransac uses a coordinate system in which the origin
of a pixel lies in the upper left corner of the pixel, whereas stationary_camera_self_calibration
uses a coordinate system that corresponds to the definition used in camera_calibration, in which the
origin of a pixel lies in the center of the pixel. For projective 2D transformations that are determined with
proj_match_points_ransac the rows and columns must be exchanged and a translation of (0.5, 0.5) must
be applied. Hence, instead of $H_{12} = K_2 R_{12} K_1^{-1}$ the following equations hold in HALCON:

$$H_{12} = \begin{pmatrix} 0 & 1 & 0.5 \\ 1 & 0 & 0.5 \\ 0 & 0 & 1 \end{pmatrix} K_2 R_{12} K_1^{-1} \begin{pmatrix} 0 & 1 & -0.5 \\ 1 & 0 & -0.5 \\ 0 & 0 & 1 \end{pmatrix}$$

and

$$K_2 R_{12} K_1^{-1} = \begin{pmatrix} 0 & 1 & -0.5 \\ 1 & 0 & -0.5 \\ 0 & 0 & 1 \end{pmatrix} H_{12} \begin{pmatrix} 0 & 1 & 0.5 \\ 1 & 0 & 0.5 \\ 0 & 0 & 1 \end{pmatrix} .$$
From the above equation, constraints on the camera parameters can be derived in two ways. First, the rotation can
be eliminated from the above equation, leading to equations that relate the camera matrices with the projective 2D
transformation between the two images. Let Hij be the projective transformation from image i to image j. Then,
$$K_j K_j^\top = H_{ij} K_i K_i^\top H_{ij}^\top$$
$$K_j^{-\top} K_j^{-1} = H_{ij}^{-\top} K_i^{-\top} K_i^{-1} H_{ij}^{-1}$$
From the second equation, linear constraints on the camera parameters can be derived. This method is used for
EstimationMethod = ’linear’. Here, all source images i given by MappingSource and all destination
images j given by MappingDest are used to compute constraints on the camera parameters. After the camera
parameters have been determined from these constraints, the rotation of the camera in the respective images can
be determined based on the equation $R_{ij} = K_j^{-1} H_{ij} K_i$ and by constructing a chain of transformations from the
reference image ReferenceImage to the respective image. From the first equation above, a nonlinear method
to determine the camera parameters can be derived by minimizing the following error:
$$E = \sum_{(i,j)\in\{(s,d)\}} \left\| K_j K_j^\top - H_{ij} K_i K_i^\top H_{ij}^\top \right\|_F^2$$
Here, analogously to the linear method, {(s, d)} is the set of overlapping images specified by MappingSource
and MappingDest. This method is used for EstimationMethod = ’nonlinear’. To start the minimization,
the camera parameters are initialized with the results of the linear method. These two methods are very fast and
return acceptable results if the projective 2D transformations Hij are sufficiently accurate. For this, it is essential
that the images do not have radial distortions. It can also be seen that in the above two methods the camera
parameters are determined independently from the rotation parameters, and consequently the possible constraints
are not fully exploited. In particular, it can be seen that it is not enforced that the projections of the same 3D
point lie close to each other in all images. Therefore, stationary_camera_self_calibration offers
a complete bundle adjustment as a third method (EstimationMethod = ’gold_standard’). Here, the camera
parameters and rotations as well as the directions in 3D corresponding to the image points (denoted by the vectors
X above), are determined in a single optimization by minimizing the following error:
E = \sum_{i=1}^{n} \left( \sum_{j=1}^{m} \left\| x_{ij} - K_i R_i X_j \right\|^2 + \frac{1}{\sigma^2} \left( u_i^2 + v_i^2 \right) \right)
In this equation, only the terms for which the reconstructed direction Xj is visible in image i are taken into account.
The starting values for the parameters in the bundle adjustment are derived from the results of the nonlinear method.
Because of the high complexity of the minimization the bundle adjustment requires a significantly longer execution
time than the two simpler methods. Nevertheless, because the bundle adjustment yields significantly better
results, it should be preferred.
In each of the three methods the camera parameters that should be computed can be specified. The remaining
parameters are set to a constant value. Which parameters should be computed is determined with the parameter
CameraModel which contains a tuple of values. CameraModel must always contain the value ’focus’ that
specifies that the focal length f is computed. If CameraModel contains the value ’principal_point’ the principal
point (u, v) of the camera is computed. If not, the principal point is set to (ImageWidth/2, ImageHeight/2).
If CameraModel contains the value ’aspect’ the aspect ratio a of the pixels is determined, otherwise it is set to
1. If CameraModel contains the value ’skew’ the skew of the image axes is determined, otherwise it is set to
0. Only the following combinations of the parameters are allowed: ’focus’, [’focus’, ’principal_point’], [’focus’,
’aspect’], [’focus’, ’principal_point’, ’aspect’], and [’focus’, ’principal_point’, ’aspect’, ’skew’].
Additionally, it is possible to determine the parameter Kappa, which models radial lens distortions, if
EstimationMethod = ’gold_standard’ has been selected. In this case, ’kappa’ can also be included in the
parameter CameraModel. Kappa corresponds to the radial distortion parameter κ of the division model for lens
distortions (see camera_calibration).
When using EstimationMethod = ’gold_standard’ to determine the principal point, it is possible to penalize
estimations far away from the image center. This can be done by adding a sigma to the parameter, e.g.,
’principal_point:0.5’. If no sigma is given, the penalty term in the above equation for calculating the error is omitted.
The parameter FixedCameraParams determines whether the camera parameters can change in each im-
age or whether they should be assumed constant for all images. To calibrate a camera so that it can
later be used for measuring with the calibrated camera, only FixedCameraParams = ’true’ is use-
ful. The mode FixedCameraParams = ’false’ is mainly useful to compute spherical mosaics with
gen_spherical_mosaic if the camera zoomed or if the focus changed significantly when the mosaic images
were taken. If a mosaic with constant camera parameters should be computed, of course FixedCameraParams
= ’true’ should be used. It should be noted that for FixedCameraParams = ’false’ the camera calibration
problem is very poorly determined, especially for long focal lengths. In these cases, often only the focal length can
be determined. Therefore, it may be necessary to use CameraModel = ’focus’ or to constrain the position of the
principal point by using a small Sigma for the penalty term for the principal point.
The number of images that are used for the calibration is passed in NumImages. Based on the number of images,
several constraints for the camera model must be observed. If only two images are used, even under the assumption
of constant parameters not all camera parameters can be determined. In this case, the skew of the image axes should
be set to 0 by not adding ’skew’ to CameraModel. If FixedCameraParams = ’false’ is used, the full set of
camera parameters can never be determined, no matter how many images are used. In this case, the skew should be
set to 0 as well. Furthermore, it should be noted that the aspect ratio can only be determined accurately if at least
one image is rotated around the optical axis (the z axis of the camera coordinate system) with respect to the other
images. If this is not the case the computation of the aspect ratio should be suppressed by not adding ’aspect’ to
CameraModel.
As described above, to calibrate the camera it is necessary that the projective transformation for each overlapping
image pair is determined with proj_match_points_ransac. For example, for a 2×2 block of images in the
following layout
1 2
3 4
the following projective transformations should be determined, assuming that all images overlap each other: 1→2,
1→3, 1→4, 2→3, 2→4, and 3→4. The indices of the images that determine the respective transformation are
given by MappingSource and MappingDest. The indices start at 1. Consequently, in the above example
MappingSource = [1,1,1,2,2,3] and MappingDest = [2,3,4,3,4,4] must be used. The number of images
in the mosaic is given by NumImages. It is used to check whether each image can be reached by a chain of
transformations. The index of the reference image is given by ReferenceImage. On output, this image has the
identity matrix as its transformation matrix.
The 3 × 3 projective transformation matrices that correspond to the image pairs are passed in
HomMatrices2D. Additionally, the coordinates of the matched point pairs in the image pairs must
be passed in Rows1, Cols1, Rows2, and Cols2. They can be determined from the output of
proj_match_points_ransac with tuple_select or with the HDevelop function subset. To enable
stationary_camera_self_calibration to determine which point pair belongs to which image pair,
NumCorrespondences must contain the number of found point matches for each image pair.
The computed camera matrices Ki are returned in CameraMatrices as 3 × 3 matrices. For
FixedCameraParams = ’false’, NumImages matrices are returned. Since for FixedCameraParams =
’true’ all camera matrices are identical, a single camera matrix is returned in this case. The computed rotations Ri
are returned in RotationMatrices as 3 × 3 matrices. RotationMatrices always contains NumImages
matrices.
If EstimationMethod = ’gold_standard’ is used, (X, Y, Z) contains the reconstructed directions Xj . In ad-
dition, Error contains the average projection error of the reconstructed directions. This can be used to check
whether the optimization has converged to useful values.
If the computed camera parameters are used to project 3D points or 3D directions into image i, the respective
camera matrix should be multiplied with the corresponding rotation matrix (with hom_mat2d_compose).
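A minimal sketch of this composition (assumptions: FixedCameraParams = ’true’, so CameraMatrices holds a single 3 × 3 matrix, and each 3 × 3 matrix is stored as 9 consecutive tuple values; I is the 0-based index of the image):
* Sketch: build the matrix that projects 3D directions into image I.
* Assumption: each 3x3 matrix occupies 9 consecutive tuple elements.
RotI := RotationMatrices[I * 9:I * 9 + 8]
hom_mat2d_compose (CameraMatrices, RotI, ProjMatI)
* ProjMatI now maps 3D directions (treated as homogeneous coordinates)
* to pixel coordinates in image I.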
Parameters
. NumImages (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of different images that are used for the calibration.
Restriction: NumImages >= 2
. ImageWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . extent.x ; integer
Width of the images from which the points were extracted.
Restriction: ImageWidth > 0
. ImageHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .extent.y ; integer
Height of the images from which the points were extracted.
Restriction: ImageHeight > 0
. ReferenceImage (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Index of the reference image.
. MappingSource (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Indices of the source images of the transformations.
. MappingDest (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Indices of the target images of the transformations.
. HomMatrices2D (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . hom_mat2d-array ; real
Array of 3 × 3 projective transformation matrices.
. Rows1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real / integer
Row coordinates of corresponding points in the respective source images.
. Cols1 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real / integer
Column coordinates of corresponding points in the respective source images.
. Rows2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.y-array ; real / integer
Row coordinates of corresponding points in the respective destination images.
. Cols2 (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point.x-array ; real / integer
Column coordinates of corresponding points in the respective destination images.
. NumCorrespondences (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Number of point correspondences in the respective image pair.
* Assume that Images contains four images in the layout given in the
* above description. Then the following example performs the camera
* self-calibration using these four images.
From := [1,1,1,2,2,3]
To := [2,3,4,3,4,4]
HomMatrices2D := []
Rows1 := []
Cols1 := []
Rows2 := []
Cols2 := []
NumMatches := []
for J := 0 to |From|-1 by 1
select_obj (Images, ImageF, From[J])
select_obj (Images, ImageT, To[J])
points_foerstner (ImageF, 1, 2, 3, 100, 0.1, 'gauss', 'true', \
RowsF, ColsF, _, _, _, _, _, _, _, _)
points_foerstner (ImageT, 1, 2, 3, 100, 0.1, 'gauss', 'true', \
RowsT, ColsT, _, _, _, _, _, _, _, _)
proj_match_points_ransac (ImageF, ImageT, RowsF, ColsF, RowsT, ColsT, \
'ncc', 10, 0, 0, 480, 640, 0, 0.5, \
'gold_standard', 2, 42, HomMat2D, \
Points1, Points2)
HomMatrices2D := [HomMatrices2D,HomMat2D]
Rows1 := [Rows1,subset(RowsF,Points1)]
Cols1 := [Cols1,subset(ColsF,Points1)]
Rows2 := [Rows2,subset(RowsT,Points2)]
Cols2 := [Cols2,subset(ColsT,Points2)]
NumMatches := [NumMatches,|Points1|]
endfor
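* The collected correspondences can now be passed to the self-calibration.
* The call below is a hedged sketch: the control values and the order of the
* trailing arguments are assumptions, not taken from the parameter list above.
get_image_size (ImageF, Width, Height)
stationary_camera_self_calibration (4, Width, Height, 1, From, To, \
                                    HomMatrices2D, Rows1, Cols1, Rows2, Cols2, \
                                    NumMatches, 'gold_standard', \
                                    ['focus','principal_point'], 'true', \
                                    CameraMatrices, Kappa, RotationMatrices, \
                                    X, Y, Z, Error)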
Result
If the parameters are valid, the operator stationary_camera_self_calibration returns the value 2
(H_MSG_TRUE). If necessary an exception is raised.
Execution Information
Classification
add_class_train_data_gmm ( : : GMMHandle,
ClassTrainDataHandle : )
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_class_gmm, create_class_train_data
Possible Successors
get_sample_class_gmm
Alternatives
add_sample_class_gmm
See also
create_class_gmm
Module
Foundation
• GMMHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_class_gmm
Possible Successors
train_class_gmm, write_samples_class_gmm
Alternatives
read_samples_class_gmm, add_samples_image_class_gmm
See also
clear_samples_class_gmm, get_sample_num_class_gmm, get_sample_class_gmm
Module
Foundation
clear_class_gmm ( : : GMMHandle : )
• GMMHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
classify_class_gmm, evaluate_class_gmm
See also
create_class_gmm, read_class_gmm, write_class_gmm, train_class_gmm
Module
Foundation
clear_samples_class_gmm ( : : GMMHandle : )
clear_samples_class_gmm clears all training samples that have been stored in the Gaussian Mixture
Model (GMM) GMMHandle. clear_samples_class_gmm should only be used if the GMM is trained
in the same process that uses the GMM for evaluation with evaluate_class_gmm or for classification
with classify_class_gmm. In this case, the memory required for the training samples can be freed
with clear_samples_class_gmm, and hence memory can be saved. In the normal usage, in which the
GMM is trained offline and written to a file with write_class_gmm, it is typically unnecessary to call
clear_samples_class_gmm because write_class_gmm does not save the training samples, and hence
the online process, which reads the GMM with read_class_gmm, requires no memory for the training samples.
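A minimal sketch of this in-process usage (the training parameter values follow the defaults documented for train_class_gmm; Features is a placeholder feature vector):
* Train in the same process, free the sample memory, then keep classifying.
train_class_gmm (GMMHandle, 100, 0.001, 'training', 0.0001, Centers, Iter)
clear_samples_class_gmm (GMMHandle)
* The trained GMM remains usable; only the stored samples were released.
evaluate_class_gmm (GMMHandle, Features, ClassProb, Density, KSigmaProb)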
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .class_gmm(-array) ; handle
GMM handle.
Result
If the parameters are valid, the operator clear_samples_class_gmm returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information
exactly one parameter: The parameter determines the exact number of centers to be used for all classes.
exactly two parameters: The first parameter determines the minimum number of centers, the second determines
the maximum number of centers for all classes.
exactly 2 · NumClasses parameters: Alternately, every first parameter determines the minimum number of
centers per class and every second parameter determines the maximum number of centers per class.
When upper and lower bounds are specified, the optimum number of centers will be determined with the help of
the Minimum Message Length Criterion (MML). In general, we recommend starting the training with a (too) large
maximum number of centers and the expected number of centers as the minimum.
Each center is described by the parameters center mj , covariance matrix Cj , and mixing coefficient Pj . These pa-
rameters are calculated from the training data by means of the Expectation Maximization (EM) algorithm. A GMM
can approximate an arbitrary probability density, provided that enough centers are being used. The covariance ma-
trices Cj have the dimensions NumDim · NumDim (NumComponents · NumComponents if preprocessing is
used) and are symmetric. Further constraints can be given by CovarType:
For CovarType = ’spherical’, C_j is a scalar multiple of the identity matrix, C_j = s_j^2 I. The center density
function p(x|j) is
p(x|j) = \frac{1}{(2\pi s_j^2)^{d/2}} \exp\left( -\frac{\| x - m_j \|^2}{2 s_j^2} \right)
For CovarType = ’diag’, C_j is a diagonal matrix, C_j = diag(s_{j,1}^2, \ldots, s_{j,d}^2). The center density
function p(x|j) is
p(x|j) = \frac{1}{\left( 2\pi \prod_{i=1}^{d} s_{j,i}^2 \right)^{d/2}} \exp\left( -\sum_{i=1}^{d} \frac{(x_i - m_{j,i})^2}{2 s_{j,i}^2} \right)
For CovarType = ’full’, Cj is a positive definite matrix. The center density function p(x|j) is
p(x|j) = \frac{1}{(2\pi)^{d/2} \, |C_j|^{1/2}} \exp\left( -\frac{1}{2} (x - m_j)^\top C_j^{-1} (x - m_j) \right)
The complexity of the calculations increases from CovarType = ’spherical’ over CovarType = ’diag’ to
CovarType = ’full’. At the same time the flexibility of the centers increases. In general, ’spherical’ therefore
needs higher values for NumCenters than ’full’.
The procedure to use GMM is as follows: First, a GMM is created by create_class_gmm. Then,
training vectors are added by add_sample_class_gmm, afterwards they can be written to disk with
write_samples_class_gmm. With train_class_gmm the classifier center parameters (defined above)
are determined. Furthermore, they can be saved with write_class_gmm for later classifications.
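A minimal HDevelop sketch of this procedure (two classes, 2D placeholder features, and placeholder file names; the create_class_gmm argument order beyond NumDim and NumClasses is an assumption):
* Create a GMM: 2 feature dimensions, 2 classes, 1 to 5 centers per class.
create_class_gmm (2, 2, [1,5], 'full', 'none', 2, 42, GMMHandle)
* Add training vectors (placeholder values), one call per sample.
add_sample_class_gmm (GMMHandle, [0.1,0.2], 0, 0.0)
add_sample_class_gmm (GMMHandle, [0.8,0.9], 1, 0.0)
* Optionally store the samples for later re-training.
write_samples_class_gmm (GMMHandle, 'gmm_samples.gsf')
* Determine the center parameters with the EM algorithm.
train_class_gmm (GMMHandle, 100, 0.001, 'training', 0.0001, Centers, Iter)
* Save the trained classifier for the online process.
write_class_gmm (GMMHandle, 'classifier.ggc')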
From the mixing probabilities Pj and the center density function p(x|j), the probability density function p(x) can
be calculated by:
p(x) = \sum_{j=1}^{n_{comp}} P(j) \, p(x|j)
The probability density function p(x) can be evaluated with evaluate_class_gmm for a feature vector x.
classify_class_gmm sorts the p(x) and therefore discovers the most probable class of the feature vector.
The parameters Preprocessing and NumComponents can be used to preprocess the training data and reduce
its dimensions. These parameters are explained in the description of the operator create_class_mlp.
create_class_gmm initializes the coordinates of the centers with random numbers. To ensure that the results of
training the classifier with train_class_gmm are reproducible, the seed value of the random number generator
is passed in RandSeed.
Parameters
. NumDim (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of dimensions of the feature space.
Default: 3
Suggested values: NumDim ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction: NumDim >= 1
. NumClasses (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of classes of the GMM.
Default: 5
Suggested values: NumClasses ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
Restriction: NumClasses >= 1
Result
If the parameters are valid, the operator create_class_gmm returns the value 2 (H_MSG_TRUE). If necessary
an exception is raised.
Execution Information
See also
clear_class_gmm, train_class_gmm, classify_class_gmm, evaluate_class_gmm,
classify_image_class_gmm
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Mario A.T. Figueiredo: “Unsupervised Learning of Finite Mixture Models”; IEEE Transactions on Pattern Analy-
sis and Machine Intelligence, Vol. 24, No. 3; March 2002.
Module
Foundation
and returned for each class in ClassProb. The formulas for the calculation of the center density function p(x|j)
are described with create_class_gmm.
The probability density of the feature vector is computed as a sum of the posterior class probabilities
p(x) = \sum_{i=1}^{n_{classes}} Pr(i) \, p(i|x)
and is returned in Density. Here, Pr(i) are the prior class probabilities as computed by train_class_gmm.
Density can be used for novelty detection, i.e., to reject feature vectors that do not belong to any of the trained
classes. However, since Density depends on the scaling of the feature vectors and since Density is a probabil-
ity density, and consequently does not need to lie between 0 and 1, the novelty detection can typically be performed
more easily with KSigmaProb (see below).
A k-sigma error ellipsoid is defined as a locus of points for which
(x - \mu)^\top C^{-1} (x - \mu) = k^2
In the one dimensional case this is the interval [µ − kσ, µ + kσ]. For any 1D Gaussian distribution, it is true that
approximately 68% of the occurrences of the random variable are within this range for k = 1, approximately 95%
for k = 2, approximately 99% for k = 3, etc. This probability is called k-sigma probability and is denoted by
P[k]. P[k] can be computed numerically for univariate as well as for multivariate Gaussian distributions, where it
should be noted that for the same values of k, P^{(N)}[k] > P^{(N+1)}[k] (here N and N+1 denote dimensions). For
Gaussian mixture models the k-sigma probability is computed as:
P_{GMM}[x] = \sum_{j=1}^{n_{comp}} P(j) \, P_j[k_j]
where
k_j^2 = (x - \mu_j)^\top C_j^{-1} (x - \mu_j) .
The values P_{GMM}[x] are weighted with the class priors and then normalized. The maximum value of all classes is returned
in KSigmaProb, such that
KSigmaProb = \frac{1}{Pr_{max}} \max_i \left( Pr(i) \, P_{GMM}[x] \right)
KSigmaProb can be used for novelty detection, as it indicates how well a feature vector fits into the distribution
of the class it is assigned to. Typically, feature vectors having values below 0.0001 should be rejected. Note that
the rejection threshold defined by the parameter RejectionThreshold in classify_image_class_gmm
refers to the KSigmaProb values.
Before calling evaluate_class_gmm, the GMM must be trained with train_class_gmm.
The position of the maximum value of ClassProb is usually interpreted as the class of the feature vector and the
corresponding value as the probability of the class. In this case, classify_class_gmm should be used instead
of evaluate_class_gmm, because classify_class_gmm directly returns the class and corresponding
probability.
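A hedged sketch of the rejection test described above (Features is a placeholder vector; the classify_class_gmm signature is assumed):
* Reject feature vectors that do not fit any trained class.
evaluate_class_gmm (GMMHandle, Features, ClassProb, Density, KSigmaProb)
if (KSigmaProb < 0.0001)
    * Novelty: the vector is not explained well by any class.
    Class := -1
else
    * Otherwise let classify_class_gmm return the most probable class.
    classify_class_gmm (GMMHandle, Features, 1, ClassID, ClassProbBest, \
                        DensityBest, KSigmaProbBest)
    Class := ClassID[0]
endif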
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
GMM handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector.
. ClassProb (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
A-posteriori probability of the classes.
. Density (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .real ; real
Probability density of the feature vector.
Module
Foundation
get_prep_info_class_gmm ( : : GMMHandle,
Preprocessing : InformationCont, CumInformationCont )
is computed solely based on the training data, independent of any error rate on the training data. The information
content is computed for all relevant components of the transformed feature vectors (NumComponents for ’princi-
pal_components’ and ’canonical_variates’, see create_class_gmm), and is returned in InformationCont
as a number between 0 and 1. To convert the information content into a percentage, it simply needs to be mul-
tiplied by 100. The cumulative information content of the first n components is returned in the n-th compo-
nent of CumInformationCont, i.e., CumInformationCont contains the sums of the first n elements of
InformationCont. To use get_prep_info_class_gmm, a sufficient number of samples must be added
to the GMM given by GMMHandle by using add_sample_class_gmm or read_samples_class_gmm.
InformationCont and CumInformationCont can be used to decide how many components of the
transformed feature vectors contain relevant information. An often used criterion is to require that the trans-
formed data must represent x% (e.g., 90%) of the data. This can be decided easily from the first value
of CumInformationCont that lies above x%. The number thus obtained can be used as the value for
NumComponents in a new call to create_class_gmm. The call to get_prep_info_class_gmm al-
ready requires the creation of a GMM, and hence the setting of NumComponents in create_class_gmm
to an initial value. However, if get_prep_info_class_gmm is called, it is typically not known how many
components are relevant, and hence how to set NumComponents in this call. Therefore, the following two-step
approach should typically be used to select NumComponents: In a first step, a GMM with the maximum num-
ber for NumComponents is created (NumComponents for ’principal_components’ and ’canonical_variates’).
Then, the training samples are added to the GMM and are saved in a file using write_samples_class_gmm.
Subsequently, get_prep_info_class_gmm is used to determine the information content of the compo-
nents, and with this NumComponents. After this, a new GMM with the desired number of components is
created, and the training samples are read with read_samples_class_gmm. Finally, the GMM is trained with
train_class_gmm.
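A sketch of this two-step approach (feature dimension 10, 3 classes, the 90% criterion, and the file name are placeholder assumptions; the create_class_gmm argument order is assumed):
* Step 1: GMM with the maximum number of components, samples added and saved.
create_class_gmm (10, 3, 1, 'full', 'principal_components', 10, 42, GMMHandle)
* ... add_sample_class_gmm (GMMHandle, ..., ..., 0.0) for all samples ...
write_samples_class_gmm (GMMHandle, 'samples.gsf')
get_prep_info_class_gmm (GMMHandle, 'principal_components', \
                         InformationCont, CumInformationCont)
* Smallest number of components whose cumulative content exceeds 90%.
NumComponents := |CumInformationCont|
for I := 0 to |CumInformationCont| - 1 by 1
    if (CumInformationCont[I] > 0.9)
        NumComponents := I + 1
        break
    endif
endfor
* Step 2: new GMM with the selected number of components, re-read and train.
create_class_gmm (10, 3, 1, 'full', 'principal_components', NumComponents, \
                  42, GMMHandle2)
read_samples_class_gmm (GMMHandle2, 'samples.gsf')
train_class_gmm (GMMHandle2, 100, 0.001, 'training', 0.0001, Centers, Iter)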
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
GMM handle.
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of preprocessing used to transform the feature vectors.
Default: ’principal_components’
List of values: Preprocessing ∈ {’principal_components’, ’canonical_variates’}
. InformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Relative information content of the transformed feature vectors.
. CumInformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Cumulative information content of the transformed feature vectors.
Example
Result
If the parameters are valid, the operator get_prep_info_class_gmm returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
get_prep_info_class_gmm may return the error 9211 (Matrix is not positive definite) if Preprocessing
= ’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Execution Information
Return a training sample from the training data of a Gaussian Mixture Model (GMM).
get_sample_class_gmm reads out a training sample from the Gaussian Mixture Model (GMM) given
by GMMHandle that was stored with add_sample_class_gmm or add_samples_image_class_gmm.
The index of the sample is specified with NumSample. The index is counted from 0, i.e., NumSample
must be a number between 0 and NumSamples − 1, where NumSamples can be determined with
get_sample_num_class_gmm. The training sample is returned in Features and ClassID. Features
is a feature vector of length NumDim, while ClassID is its class (see add_sample_class_gmm and
create_class_gmm).
get_sample_class_gmm can, for example, be used to reclassify the training data with
classify_class_gmm in order to determine which training samples, if any, are classified incorrectly.
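A sketch of such a reclassification loop (the classify_class_gmm signature is assumed):
* Count training samples that the trained GMM classifies incorrectly.
get_sample_num_class_gmm (GMMHandle, NumSamples)
NumErrors := 0
for I := 0 to NumSamples - 1 by 1
    get_sample_class_gmm (GMMHandle, I, Features, ClassID)
    classify_class_gmm (GMMHandle, Features, 1, PredictedClass, ClassProb, \
                        Density, KSigmaProb)
    if (PredictedClass != ClassID)
        NumErrors := NumErrors + 1
    endif
endfor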
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
GMM handle.
. NumSample (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Index of the stored training sample.
. Features (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector of the training sample.
. ClassID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Class of the training sample.
Example
Result
If the parameters are valid, the operator get_sample_class_gmm returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information
Return the number of training samples stored in the training data of a Gaussian Mixture Model (GMM).
get_sample_num_class_gmm returns in NumSamples the number of training samples that are stored in the
Gaussian Mixture Model (GMM) given by GMMHandle. get_sample_num_class_gmm should be called
before the individual training samples are read out with get_sample_class_gmm, e.g., for the purpose of
reclassifying the training data (see get_sample_class_gmm).
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
GMM handle.
. NumSamples (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of stored training samples.
Result
If the parameters are valid, the operator get_sample_num_class_gmm returns the value 2 (H_MSG_TRUE).
If necessary an exception is raised.
Execution Information
Possible Predecessors
add_sample_class_gmm, add_samples_image_class_gmm, read_samples_class_gmm
Possible Successors
get_sample_class_gmm
See also
create_class_gmm
Module
Foundation
It should be noted that the training samples must have the correct dimensionality. The feature vectors stored in
FileName must have the length NumDim that was specified with create_class_gmm, and enough classes
must have been created in create_class_gmm. If this is not the case, an error message is returned.
It is possible to read files of samples that were written with write_samples_class_svm or
write_samples_class_mlp.
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
GMM handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.read ; string
File name.
Result
If the parameters are valid, the operator read_samples_class_gmm returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_class_gmm
Possible Successors
train_class_gmm
Alternatives
add_sample_class_gmm
See also
write_samples_class_gmm, write_samples_class_mlp, clear_samples_class_gmm
Module
Foundation
Selects an optimal combination from a set of features to classify the provided data.
select_feature_set_gmm selects an optimal subset from a set of features to solve a given clas-
sification problem. The classification problem has to be specified with annotated training data in
ClassTrainDataHandle and will be classified by a Gaussian Mixture Model. Details of the properties of
this classifier can be found in create_class_gmm.
The result of the operator is a trained classifier that is returned in GMMHandle. Additionally, the list of indices or
names of the selected features is returned in SelectedFeatureIndices. To use this classifier, calculate for
new input data all features mentioned in SelectedFeatureIndices and pass them to the classifier.
A possible application of this operator can be a comparison of different parameter sets for certain feature extraction
techniques. Another application is to search for a feature that is discriminating between different classes.
To define the features that should be selected from ClassTrainDataHandle, the dimensions
of the feature vectors in ClassTrainDataHandle can be grouped into subfeatures by calling
A more exact description of those parameters can be found in create_class_gmm and train_class_gmm.
Attention
This operator may take considerable time, depending on the size of the data set in the training file, and the number
of features.
Please note that this operator should not be called if only a small set of training data is available. Due to the risk of
overfitting the operator select_feature_set_gmm may deliver a classifier with a very high score. However,
the classifier may perform poorly when tested.
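A hedged usage sketch, analogous to the example shown for select_feature_set_knn; the exact signature is assumed to parallel the k-NN variant:
* ...
* Select the best discriminating features with a GMM classifier
select_feature_set_gmm (ClassTrainDataHandle, 'greedy', [], [], GMMHandle, \
                        SelectedFeatureGMM, Score)
* Use the classifier
* ...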
Parameters
Result
If the parameters are valid, the operator select_feature_set_gmm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
create_class_train_data, add_sample_class_train_data,
set_feature_lengths_class_train_data
Possible Successors
classify_class_gmm
Alternatives
select_feature_set_mlp, select_feature_set_knn, select_feature_set_svm
See also
create_class_gmm, gray_features, region_features
Module
Foundation
train_class_gmm trains the Gaussian Mixture Model (GMM) referenced by GMMHandle. Before the
GMM can be trained, all training samples to be used for the training must be stored in the GMM using
add_sample_class_gmm, add_samples_image_class_gmm, or read_samples_class_gmm. Af-
ter the training, new training samples can be added to the GMM and the GMM can be trained again.
During the training, the error that results from the GMM applied to the training vectors will be minimized with the
expectation maximization (EM) algorithm.
MaxIter specifies the maximum number of iterations per class for the EM algorithm. In practice, values between
20 and 200 should be sufficient for most problems. Threshold specifies a threshold for the relative changes
of the error. If the relative change in error exceeds the threshold after MaxIter iterations, the algorithm will be
canceled for this class. Because the algorithm starts with the maximum specified number of centers (parameter
NumCenters in create_class_gmm), in case of a premature termination the number of centers and the error
for this class will not be optimal. In this case, a new training with different parameters (e.g., another value for
RandSeed in create_class_gmm) can be tried.
ClassPriors specifies the method of calculation of the class priors in GMM. If ’training’ is specified, the
priors of the classes are taken from the proportion of the corresponding sample data during training. If ’uniform’
is specified, the priors are set equal to 1/NumClasses for all classes.
Regularize is used to regularize (nearly) singular covariance matrices during the training. A covariance matrix
might collapse to singularity if it is trained with linearly dependent data. To avoid this, a small value specified by
Regularize is added to each main diagonal element of the covariance matrix, which prevents this element from
becoming smaller than Regularize. A recommended value for Regularize is 0.0001. If Regularize is
set to 0.0, no regularization is performed.
The centers are initially randomly distributed. In individual cases, relatively high errors will result from the al-
gorithm because the initial random values determined by RandSeed in create_class_gmm lead to local
minima. In this case, a new GMM with a different value for RandSeed should be generated to test whether a
significantly smaller error can be obtained.
It should be noted that, depending on the number of centers, the type of covariance matrix, and the number of
training samples, the training can take from a few seconds to several hours.
On output, train_class_gmm returns in Centers the number of centers per class that have been
found to be optimal by the EM algorithm. These values can be used as a reference in NumCenters (in
create_class_gmm) for future GMMs. If the number of centers found by training a new GMM on integer
training data is unexpectedly high, this might be corrected by adding noise to the training data (parameter Randomize) in
add_sample_class_gmm. Iter contains the number of performed iterations per class. If a value in Iter
equals MaxIter, the training algorithm has been terminated prematurely (see above).
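A minimal call sketch using the default values listed below:
* At most 100 EM iterations per class, error-change threshold 0.001,
* class priors from the training data, regularization 0.0001.
train_class_gmm (GMMHandle, 100, 0.001, 'training', 0.0001, Centers, Iter)
* If an entry of Iter equals 100, the EM algorithm for that class was
* terminated prematurely (see above).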
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
GMM handle.
. MaxIter (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Maximum number of iterations of the expectation maximization algorithm
Default: 100
Suggested values: MaxIter ∈ {10, 20, 30, 50, 100, 200}
. Threshold (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Threshold for relative change of the error for the expectation maximization algorithm to terminate.
Default: 0.001
Suggested values: Threshold ∈ {0.001, 0.0001}
Restriction: Threshold >= 0.0 && Threshold <= 1.0
. ClassPriors (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Mode to determine the a-priori probabilities of the classes
Default: ’training’
List of values: ClassPriors ∈ {’training’, ’uniform’}
. Regularize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Regularization value for preventing covariance matrix singularity.
Default: 0.0001
Restriction: Regularize >= 0.0 && Regularize < 1.0
. Centers (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer-array ; integer
Number of found centers per class
Result
If the parameters are valid, the operator train_class_gmm returns the value 2 (H_MSG_TRUE). If necessary
an exception is raised.
Execution Information
Parameters
. GMMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_gmm ; handle
GMM handle.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
File name.
File extension: .ggc
Result
If the parameters are valid, the operator write_class_gmm returns the value 2 (H_MSG_TRUE). If necessary
an exception is raised.
Execution Information
Possible Predecessors
add_sample_class_gmm
Possible Successors
clear_samples_class_gmm
See also
create_class_gmm, read_samples_class_gmm, read_samples_class_mlp,
write_samples_class_mlp
Module
Foundation
add_class_train_data_knn ( : : KNNHandle,
ClassTrainDataHandle : )
add_sample_class_knn adds a feature vector to a k-nearest neighbors (k-NN) data structure. The length of
a feature vector was specified in create_class_knn by NumDim. A handle to a k-NN data structure has to be
specified in KNNHandle.
The feature vectors are collected in Features. The length of the input tuple must be a multiple of NumDim.
Each feature vector needs a class, which is given by ClassID; if only one class ID is specified, it is used for
all vectors. A class ID is a natural number greater than or equal to 0. If only one class is used, the class ID has to be 0.
If the operator classify_image_class_knn will be used, all numbers from 0 to the number of classes minus 1
should be used, since otherwise an empty region will be generated for each unused number.
It is allowed to add samples to an already trained k-NN classifier. The new data is only integrated after another
call to train_class_knn.
If the k-NN classifier has been trained with automatic feature normalization enabled, the supplied fea-
tures Features are interpreted as unnormalized and are normalized as it was defined by the last call to
train_class_knn. Please see train_class_knn for more information on normalization.
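A minimal sketch (the create_class_knn and train_class_knn calls are assumptions; feature values are placeholders):
* k-NN classifier for 2-dimensional feature vectors.
create_class_knn (2, KNNHandle)
* Two samples of class 0 and one of class 1 in a single call; the feature
* tuple length (6) is a multiple of NumDim (2).
add_sample_class_knn (KNNHandle, [0.1,0.2,0.15,0.25,0.8,0.9], [0,0,1])
* The samples only become effective after (re-)training.
train_class_knn (KNNHandle, [], [])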
Parameters
. KNNHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
Handle of the k-NN classifier.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
List of features to add.
. ClassID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Class IDs of the features.
Result
If the parameters are valid, the operator add_sample_class_knn returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
’classes_distance’: returns the nearest samples for each of maximally ’max_num_classes’ different classes, if they
have a representative in the nearest ’k’ neighbors. The results in Result are classes sorted by their minimal
distance in Rating. There is no efficient way to determine the nearest neighbor for exactly
’max_num_classes’ classes in a k-NN tree.
’classes_frequency’: counts the occurrences of certain classes among the nearest ’k’ neighbors and returns the
occurring classes in Result sorted by their relative frequency that is returned in Rating. Again, maximally
’max_num_classes’ values are returned.
’classes_weighted_frequencies’: counts the occurrences of certain classes among the nearest ’k’ neighbors and
returns the occurring classes in Result sorted by their relative frequency weighted with the average distance
that is returned in Rating. Again, maximally ’max_num_classes’ values are returned.
’neighbors_distance’: returns the indices of the nearest ’k’ neighbors in Result and the distances in Rating.
The default behavior is ’classes_distance’ and returns the classes and distances.
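A short sketch using the default method (the feature values are placeholders):
* With 'classes_distance', Result contains class IDs sorted by their
* minimal distance, which is returned in Rating.
classify_class_knn (KNNHandle, [0.12,0.22], Result, Rating)
BestClass := Result[0]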
Parameters
. KNNHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
Handle of the k-NN classifier.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real
Features that should be classified.
. Result (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer
The classification result, either class IDs or sample indices.
. Rating (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; real
A rating for the results. This value contains either a distance, a frequency or a weighted frequency.
Result
If the parameters are valid, the operator classify_class_knn returns the value 2 (H_MSG_TRUE). If neces-
sary, an exception is raised.
Execution Information
Possible Predecessors
train_class_knn, read_class_knn, set_params_class_knn
Possible Successors
clear_class_knn
See also
create_class_knn, read_class_knn
References
Marius Muja, David G. Lowe: “Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration”;
International Conference on Computer Vision Theory and Applications (VISAPP 09); 2009.
Module
Foundation
clear_class_knn ( : : KNNHandle : )
Parameters
. KNNHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
Handle of the k-NN classifier.
Result
If the parameters are valid, the operator clear_class_knn returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Execution Information
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
train_class_knn, read_class_knn
See also
create_class_knn
References
Marius Muja, David G. Lowe: “Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration”;
International Conference on Computer Vision Theory and Applications (VISAPP 09); 2009.
Module
Foundation
Possible Predecessors
fread_serialized_item, receive_serialized_item, serialize_class_knn
Possible Successors
classify_class_knn
Alternatives
serialize_class_knn
See also
create_class_knn
References
Marius Muja, David G. Lowe: “Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration”;
International Conference on Computer Vision Theory and Applications (VISAPP 09); 2009.
Module
Foundation
get_params_class_knn ( : : KNNHandle,
GenParamName : GenParamValue )
’method’: Retrieve the currently selected method for determining the result of classify_class_knn. The re-
sult can be ’classes_distance’, ’classes_frequency’, ’classes_weighted_frequencies’ or ’neighbors_distance’.
’k’: The number of nearest neighbors that is considered to determine the results.
’max_num_classes’: The maximum number of classes that are returned. This parameter is ignored in case the
method ’neighbors_distance’ is selected.
’num_checks’: Defines the maximum number of runs through the trees.
’epsilon’: A parameter to lower the accuracy in the tree to gain speed.
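A sketch of querying and adjusting these parameters (set_params_class_knn is assumed to accept the same name/value pairs):
* Query the current method and k.
get_params_class_knn (KNNHandle, ['method','k'], GenParamValue)
* Vote by class frequency among the 5 nearest neighbors.
set_params_class_knn (KNNHandle, ['method','k'], ['classes_frequency',5])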
Parameters
. KNNHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
Handle of the k-NN classifier.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
Names of the parameters that can be read from the k-NN classifier.
Default: [’method’,’k’]
List of values: GenParamName ∈ {’method’, ’num_checks’, ’epsilon’, ’k’}
Return a training sample from the training data of a k-nearest neighbors (k-NN) classifier.
get_sample_class_knn reads a training sample from the k-nearest neighbors (k-NN) classifier given by
KNNHandle that was added with add_sample_class_knn or read_class_knn. The index of the sample
is specified with IndexSample. The index is counted from 0, i.e., IndexSample must be a number between
0 and NumSamples −1, where NumSamples can be determined with get_sample_num_class_knn. The
training sample is returned in Features and ClassID. Features is a feature vector of length NumDim (see
create_class_knn), while ClassID is the class label, which is a number between 0 and the number of
classes.
Parameters
Possible Predecessors
add_sample_class_train_data
See also
create_class_knn
Module
Foundation
Return the number of training samples stored in the training data of a k-nearest neighbors (k-NN) classifier.
get_sample_num_class_knn returns in NumSamples the number of training samples that are stored in the
k-nearest neighbors (k-NN) classifier given by KNNHandle. get_sample_num_class_knn should be called
before the individual training samples are accessed with get_sample_class_knn.
Parameters
. KNNHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
Handle of the k-NN classifier.
. NumSamples (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of stored training samples.
Result
If KNNHandle is valid, the operator get_sample_num_class_knn returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
Result
read_class_knn returns 2 (H_MSG_TRUE). An exception is raised if it was not possible to open the file
FileName or the file has the wrong format.
Execution Information
Selects an optimal subset from a set of features to solve a certain classification problem.
select_feature_set_knn selects an optimal subset from a set of features to solve a certain clas-
sification problem. The classification problem has to be specified with annotated training data in
ClassTrainDataHandle and will be classified by a k-nearest neighbors classifier. Details of the proper-
ties of this classifier can be found in create_class_knn.
The result of the operator is a trained classifier that is returned in KNNHandle. Additionally, the list of indices or
names of the selected features is returned in SelectedFeatureIndices. To use this classifier, calculate for
new input data all features mentioned in SelectedFeatureIndices and pass them to the classifier.
A possible application of this operator can be a comparison of different parameter sets for certain feature extraction
techniques. Another application is to search for a property that is discriminating between different classes of parts
or classes of errors.
To define the features that should be selected from ClassTrainDataHandle, the dimensions
of the feature vectors in ClassTrainDataHandle can be grouped into subfeatures by calling
set_feature_lengths_class_train_data. A subfeature can contain several subsequent elements of
a feature vector. The operator decides for each of these subfeatures, if it is better to use it for the classification or
leave it out.
The indices of the selected subfeatures are returned in SelectedFeatureIndices. If names were set
in set_feature_lengths_class_train_data, these names are returned instead of the indices. If
set_feature_lengths_class_train_data was not called for ClassTrainDataHandle before,
each element of the feature vector is considered as a subfeature.
The selection method SelectionMethod is either a greedy search ’greedy’ (iteratively add the feature with
highest gain) or the dynamically oscillating search ’greedy_oscillating’ (add the feature with highest gain and then
test whether any of the already added features can be left out without great loss). The method ’greedy’ is generally
preferable, since it is faster. Only if the subfeatures are low-dimensional or redundant should the method
’greedy_oscillating’ be chosen.
The optimization criterion is the classification rate of a two-fold cross-validation of the training data. The best
achieved value is returned in Score.
The k-NN classifier can be parameterized using the following values in GenParamName and GenParamValue:
’num_neighbors’: The number of minimally evaluated nodes; increase this value for high-dimensional data.
Suggested values: ’1’, ’2’, ’5’, ’10’
Default: ’1’
’num_trees’: Number of search trees in the k-NN classifier
Suggested values: ’1’, ’4’, ’10’
Default: ’4’
Attention
This operator may take considerable time, depending on the size of the data set in the training file, and the number
of features.
Please note that this operator should not be called if only a small set of training data is available. Due to the risk of
overfitting the operator select_feature_set_knn may deliver a classifier with a very high score. However,
the classifier may perform poorly when tested.
Parameters
. ClassTrainDataHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of the training data.
. SelectionMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Method to perform the selection.
Default: ’greedy’
List of values: SelectionMethod ∈ {’greedy’, ’greedy_oscillating’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Names of generic parameters to configure the selection process and the classifier.
Default: []
List of values: GenParamName ∈ {’num_neighbors’, ’num_trees’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer / string
Values of generic parameters to configure the selection process and the classifier.
Default: []
Suggested values: GenParamValue ∈ {1, 2, 3}
. KNNHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
A trained k-NN classifier using only the selected features.
. SelectedFeatureIndices (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
The selected feature set, contains indices or names.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
The achieved score using two-fold cross-validation.
Example
* ...
* Select the better feature with the k-NN classifier
select_feature_set_knn (ClassTrainDataHandle, 'greedy', [], [], KNNHandle,\
SelectedFeatureKNN, Score)
* Use the classifier
* ...
Result
If the parameters are valid, the operator select_feature_set_knn returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
Possible Predecessors
train_class_knn, read_class_knn
Possible Successors
fwrite_serialized_item, send_serialized_item, deserialize_class_knn
See also
create_class_knn, read_class_knn, deserialize_class_knn
References
Marius Muja, David G. Lowe: “Fast Approximate Nearest Neighbors with Automatic Algorithm Configuration”;
International Conference on Computer Vision Theory and Applications (VISAPP 09); 2009.
Module
Foundation
Result
If the parameters are valid, the operator set_params_class_knn returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
clear_class_lut ( : : ClassLUTHandle : )
Create a look-up table using a Gaussian Mixture Model to classify byte images.
create_class_lut_gmm generates a look-up table (LUT) ClassLUTHandle using the data of a trained
Gaussian Mixture Model (GMM) GMMHandle to classify multi-channel byte images. By using this GMM-based
LUT classifier, the operator classify_image_class_gmm of the subsequent classification can be replaced by
the operator classify_image_class_lut. The classification is sped up considerably, because the estimation
of the class at every image point is no longer necessary since every possible response of the GMM is stored in the
LUT. For the generation of the LUT, the parameters NumDim, Preprocessing, and NumComponents defined
in the previously called operator create_class_gmm are important. NumDim defines the number of image channels
that the images to be classified must have. By using Preprocessing (see create_class_gmm),
the number of image channels can be transformed to NumComponents. NumComponents defines the length
of the feature vector, which the classifier classify_class_gmm handles internally. Because of perfor-
mance and disk space, the LUT is restricted to be maximal 3-dimensional. Since it replaces the operator
classify_class_gmm, NumComponents ≤ 3 must hold. If there is no preprocessing that reduces the num-
ber of image channels (NumDim = NumComponents), all possible pixel values, which can occur in a byte
image, are classified with classify_class_gmm. The returned classes are stored in the LUT. If there is
a preprocessing that reduces the number of image channels (NumDim > NumComponents), the preprocess-
ing parameters of the GMM are stored in a separate structure of the LUT. To create the LUT, all transformed
pixel values are classified with classify_class_gmm. The returned classes are stored in the LUT. Because
of the discretization of the LUT, the accuracy of the LUT classifier could become lower than the accuracy of
classify_image_class_gmm. With ’bit_depth’ and ’class_selection’ the accuracy of the classification, the
required storage, and the runtime needed to create the LUT can be controlled.
The following parameters of the GMM-based LUT classifier can be set with GenParamName and
GenParamValue:
’bit_depth’: Number of bits used from the pixels. It controls the storage requirement of the LUT classifier and is
bounded by the bit depth of the image (’bit_depth’ ≤ 8). If the bit depth of the LUT is smaller (’bit_depth’
< 8), the classes of multiple pixel combinations will be mapped to the same LUT entry, which can result
in a lower accuracy for the classification. One of these clusters contains 2^(NumComponents·(8−bit_depth))
pixel combinations, where NumComponents denotes the dimension of the LUT, which is specified in
create_class_gmm. For example, for ’bit_depth’ = 7, NumComponents = 3, the classes of 8 pixel
combinations are mapped in the same LUT entry. The LUT requires at most 2^(NumComponents·bit_depth+2)
bytes of storage. For example, for NumComponents = 3, ’bit_depth’ = 8 and NumClasses < 16 (spec-
ified in create_class_gmm), the LUT requires 8 MB of storage with internal storage optimization. If
NumClasses = 1, the LUT requires only 2 MB of storage by using the full bit depth of the LUT. The
runtime for the classification in classify_image_class_lut becomes minimal if the LUT fits into
the cache. Suggested values: 6,7,8
Default: 8
Restriction: ’bit_depth’ ≥ 1, ’bit_depth’ ≤ 8.
’class_selection’: Method for the class selection for the LUT. Can be modified to control the accuracy and the
runtime needed to create the LUT classifier. The value in ’class_selection’ is ignored if the bit depth of the
LUT is maximal, thus ’bit_depth’ = 8 holds. If the bit depth of the LUT is smaller (’bit_depth’ < 8), the
classes of multiple pixel combinations will be mapped to the same LUT entry. One of these clusters contains
2^(NumComponents·(8−bit_depth)) pixel combinations, where NumComponents denotes the dimension of the
LUT, which is specified in create_class_gmm. By choosing ’class_selection’ = ’best’, the class that
appears most often in the cluster is stored in the LUT. For ’class_selection’ = ’fast’, only one pixel of the
cluster, i.e., the pixel with the smallest value (component-wise), is classified. The returned class is stored in
the LUT. In this case, the accuracy of the subsequent classification could become lower. On the other hand,
the runtime needed to create the LUT can be reduced; it is proportional to the maximal storage needed for
the LUT, which is 2^(NumComponents·’bit_depth’+2) bytes.
List of values: ’fast’, ’best’
Default: ’fast’
’rejection_threshold’: Threshold for the rejection of uncertain classified points of the GMM. The param-
eter represents a threshold on the K-sigma probability measure returned by the classification (see
classify_class_gmm and evaluate_class_gmm). All pixels having a probability below ’rejec-
tion_threshold’ are not assigned to any class.
Default: 0.0001
Restriction: ’rejection_threshold’ ≥ 0, ’rejection_threshold’ ≤ 1.
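As a worked example of the storage figures quoted for ’bit_depth’ above, assume NumComponents = 3 and
’bit_depth’ = 8, so the LUT has 2^24 entries; the per-entry sizes of 4 bytes, 4 bits, and 1 bit are an interpretation of
the “internal storage optimization” and are not stated explicitly here:

2^{NumComponents \cdot \mathrm{bit\_depth} + 2} = 2^{26}\ \text{bytes} = 64\ \text{MB} \quad \text{(upper bound, 4 bytes per entry)}
2^{24} \cdot \tfrac{1}{2}\ \text{bytes} = 8\ \text{MB} \quad \text{(4-bit entries, NumClasses} < 16\text{)}
2^{24} \cdot \tfrac{1}{8}\ \text{bytes} = 2\ \text{MB} \quad \text{(1-bit entries, NumClasses} = 1\text{)}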
Parameters
Create a look-up table using a k-nearest neighbors classifier (k-NN) to classify byte images.
create_class_lut_knn generates a look-up table (LUT) ClassLUTHandle using the data of a trained k-
nearest neighbors classifier (k-NN) KNNHandle to classify multi-channel byte images. By using this k-NN-based
LUT classifier, the operator classify_image_class_knn of the subsequent classification can be replaced
by the operator classify_image_class_lut. The classification is sped up considerably because the estimation
of the class at every image point is no longer necessary: every possible response of the k-NN is stored in the LUT.
For the generation of the LUT, the parameter NumDim of the previously called operator create_class_knn is
important. In NumDim, the number of image channels the images must have to be classified is defined.
To create the LUT, all pixel values are classified with classify_class_knn. The returned classes are stored
in the LUT. Because of the discretization of the LUT, the accuracy of the LUT classifier could become lower than
the accuracy of classify_image_class_knn.
With ’bit_depth’ the accuracy of the classification, the required storage, and the runtime needed to create the LUT
can be controlled.
The following parameters of the k-NN-based LUT classifier can be set with GenParamName and
GenParamValue:
’bit_depth’: Number of bits used from the pixels. It controls the storage requirement of the LUT classifier and is
bounded by the bit depth of the image (’bit_depth’ ≤ 8). If the bit depth of the LUT is smaller (’bit_depth’
< 8), the classes of multiple pixel combinations will be mapped to the same LUT entry, which can result in
a lower accuracy for the classification. One of these clusters contains 2^(NumDim·(8−’bit_depth’)) pixel
combinations, where NumDim denotes the dimension of the LUT, which is specified in create_class_knn.
For example, for ’bit_depth’ = 7, NumDim = 3, the classes of 8 pixel combinations are mapped to the same
LUT entry. The LUT requires at most 2^(NumDim·’bit_depth’+2) bytes of storage. For example, for NumDim = 3,
’bit_depth’ = 8, and the number of classes
is smaller than 16, the LUT requires 8 MB of storage with internal storage optimization. The runtime for the
classification in classify_image_class_lut becomes minimal if the LUT fits into the cache.
Suggested values: 6,7,8
Default: 8
Restriction: ’bit_depth’ ≥ 1, ’bit_depth’ ≤ 8.
’rejection_threshold’: Threshold for the rejection of uncertain classified points of the k-NN. The parameter rep-
resents a threshold on the distance returned by the classification (see classify_class_knn). All pixels
having a distance over ’rejection_threshold’ are not assigned to any class.
Default: 5
Restriction: ’rejection_threshold’ ≥ 0.
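A minimal HDevelop sketch of the intended workflow. The calls to create_class_knn, add_sample_class_knn,
and train_class_knn as well as the image file name are assumptions added for illustration; only
create_class_lut_knn and classify_image_class_lut are described in this section:

* Train a k-NN on 3-dimensional (e.g., RGB) feature vectors.
create_class_knn (3, KNNHandle)
* ... add labeled samples with add_sample_class_knn (KNNHandle, Features, ClassID) ...
train_class_knn (KNNHandle, [], [])
* Replace the k-NN by a LUT classifier for fast pixel classification.
create_class_lut_knn (KNNHandle, [], [], ClassLUTHandle)
* Classify a 3-channel byte image with the LUT (the file name is hypothetical).
read_image (Image, 'color_parts')
classify_image_class_lut (Image, ClassRegions, ClassLUTHandle)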
Parameters
. KNNHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_knn ; handle
Handle of the k-NN classifier.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters that can be adjusted for the LUT classifier creation.
Default: []
Suggested values: GenParamName ∈ {’bit_depth’, ’rejection_threshold’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; string / integer / real
Values of the generic parameters that can be adjusted for the LUT classifier creation.
Default: []
Suggested values: GenParamValue ∈ {8, 7, 6, 0.5, 5, 10, 50}
. ClassLUTHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_lut ; handle
Handle of the LUT classifier.
Result
If the parameters are valid, the operator create_class_lut_knn returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information
classifier the operator classify_image_class_mlp of the subsequent classification can be replaced by the
operator classify_image_class_lut. The classification gets a major speed-up, because the estimation of
the class in every image point is no longer necessary since every possible response of the MLP is stored in the LUT.
For the generation of the LUT, the parameters NumInput, Preprocessing, and NumComponents defined
in the earlier called operator create_class_mlp are important. In NumInput, the number of image channels
the images must have to be classified is defined. By using the Preprocessing (see create_class_mlp)
the number of image channels can be transformed to NumComponents. NumComponents defines the length
of the feature vector, which the classifier classify_class_mlp handles internally. For reasons of performance
and disk space, the LUT is restricted to at most 3 dimensions. Since it replaces the operator
classify_class_mlp, NumComponents ≤ 3 must hold. If there is no preprocessing that reduces the
number of image channels (NumInput = NumComponents), all possible pixel values, which can occur in a
byte image, are classified with classify_class_mlp. The returned classes are stored in the LUT. If there
is a preprocessing that reduces the number of image channels (NumInput > NumComponents), the prepro-
cessing parameters of the MLP are stored in a separate structure of the LUT. To create the LUT, all transformed
pixel values are classified with classify_class_mlp. The returned classes are stored in the LUT. Because
of the discretization of the LUT, the accuracy of the LUT classifier could become lower than the accuracy of
classify_image_class_mlp. With ’bit_depth’ and ’class_selection’ the accuracy of the classification, the
required storage, and the runtime needed to create the LUT can be controlled.
The following parameters of the MLP-based LUT classifier can be set with GenParamName and
GenParamValue:
’bit_depth’: Number of bits used from the pixels. It controls the storage requirement of the LUT classifier and is
bounded by the bit depth of the image (’bit_depth’ ≤ 8). If the bit depth of the LUT is smaller (’bit_depth’
< 8), the classes of multiple pixel combinations will be mapped to the same LUT entry, which can result
in a lower accuracy for the classification. One of these clusters contains 2^(NumComponents·(8−’bit_depth’))
pixel combinations, where NumComponents denotes the dimension of the LUT, which is specified in
create_class_mlp. For example, for ’bit_depth’ = 7, NumComponents = 3, the classes of 8 pixel
combinations are mapped to the same LUT entry. The LUT requires at most 2^(NumComponents·’bit_depth’+2)
bytes of storage. For example, for NumComponents = 3, ’bit_depth’ = 8 and NumOutput < 16 (spec-
ified in create_class_mlp), the LUT requires 8 MB of storage with internal storage optimization. If
NumOutput = 1, the LUT requires only 2 MB of storage by using the full bit depth of the LUT. The runtime
for the classification in classify_image_class_lut becomes minimal if the LUT fits into the cache.
Suggested values: 6,7,8
Default: 8
Restriction: ’bit_depth’ ≥ 1, ’bit_depth’ ≤ 8.
’class_selection’: Method for the class selection for the LUT. Can be modified to control the accuracy and the
runtime needed to create the LUT classifier. The value in ’class_selection’ is ignored if the bit depth of the
LUT is maximal, thus ’bit_depth’ = 8 holds. If the bit depth of the LUT is smaller (’bit_depth’ < 8), the
classes of multiple pixel combinations will be mapped to the same LUT entry. One of these clusters contains
2^(NumComponents·(8−’bit_depth’)) pixel combinations, where NumComponents denotes the dimension of the
LUT, which is specified in create_class_mlp. By choosing ’class_selection’ = ’best’, the class that
appears most often in the cluster is stored in the LUT. For ’class_selection’ = ’fast’, only one pixel of the
cluster, i.e., the pixel with the smallest value (component-wise), is classified. The returned class is stored in
the LUT. In this case, the accuracy of the subsequent classification could become lower. On the other hand,
the runtime needed to create the LUT can be reduced; it is proportional to the maximal storage needed for
the LUT, which is 2^(NumComponents·’bit_depth’+2) bytes.
List of values: ’fast’, ’best’
Default: ’fast’
’rejection_threshold’: Threshold for the rejection of uncertain classified points of the MLP. The parameter rep-
resents a threshold on the probability measure returned by the classification (see classify_class_mlp
and evaluate_class_mlp). All pixels having a probability below ’rejection_threshold’ are not assigned
to any class.
Default: 0.5
Restriction: ’rejection_threshold’ ≥ 0, ’rejection_threshold’ ≤ 1.
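The generic parameters are passed as parallel tuples in GenParamName and GenParamValue. A minimal sketch,
assuming MLPHandle is an already trained MLP with NumComponents ≤ 3 and OutputFunction = ’softmax’:

* Use 7 bits per channel and the majority class of each cluster.
create_class_lut_mlp (MLPHandle, ['bit_depth','class_selection'], [7,'best'], ClassLUTHandle)
* Subsequent pixel classification uses the LUT instead of the MLP.
classify_image_class_lut (Image, ClassRegions, ClassLUTHandle)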
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters that can be adjusted for the LUT classifier creation.
Default: []
Suggested values: GenParamName ∈ {’bit_depth’, ’class_selection’, ’rejection_threshold’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.value-array ; string / integer / real
Values of the generic parameters that can be adjusted for the LUT classifier creation.
Default: []
Suggested values: GenParamValue ∈ {8, 7, 6, ’fast’, ’best’}
. ClassLUTHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_lut ; handle
Handle of the LUT classifier.
Result
If the parameters are valid, the operator create_class_lut_mlp returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information
classify_class_svm. The returned classes are stored in the LUT. If there is a preprocessing that reduces
the number of image channels (NumFeatures > NumComponents), the preprocessing parameters of the SVM
are stored in a separate structure of the LUT. To create the LUT, all transformed pixel values are classified with
classify_class_svm. The returned classes are stored in the LUT. Because of the discretization of the LUT,
the accuracy of the LUT classifier could become lower than the accuracy of classify_image_class_svm.
With ’bit_depth’ and ’class_selection’ the accuracy of the classification, the required storage, and the runtime
needed to create the LUT can be controlled.
The following parameters of the SVM-based LUT classifier can be set with GenParamName and
GenParamValue:
’bit_depth’: Number of bits used from the pixels. It controls the storage requirement of the LUT classifier and is
bounded by the bit depth of the image (’bit_depth’ ≤ 8). If the bit depth of the LUT is smaller (’bit_depth’
< 8), the classes of multiple pixel combinations will be mapped to the same LUT entry, which can result
in a lower accuracy for the classification. One of these clusters contains 2^(NumComponents·(8−’bit_depth’))
pixel combinations, where NumComponents denotes the dimension of the LUT, which is specified in
create_class_svm. For example, for ’bit_depth’ = 7, NumComponents = 3, the classes of 8 pixel
combinations are mapped to the same LUT entry. The LUT requires at most 2^(NumComponents·’bit_depth’+2)
bytes of storage. For example, for NumComponents = 3, ’bit_depth’ = 8 and NumClasses < 16 (spec-
ified in create_class_svm), the LUT requires 8 MB of storage with internal storage optimization. If
NumClasses = 1, the LUT requires only 2 MB of storage by using the full bit depth of the LUT. The
runtime for the classification in classify_image_class_lut becomes minimal if the LUT fits into
the cache.
Suggested values: 6,7,8
Default: 8
Restriction: ’bit_depth’ ≥ 1, ’bit_depth’ ≤ 8.
’class_selection’: Method for the class selection for the LUT. Can be modified to control the accuracy and the
runtime needed to create the LUT classifier. The value in ’class_selection’ is ignored if the bit depth of the
LUT is maximal, thus ’bit_depth’ = 8 holds. If the bit depth of the LUT is smaller (’bit_depth’ < 8), the
classes of multiple pixel combinations will be mapped to the same LUT entry. One of these clusters contains
2^(NumComponents·(8−’bit_depth’)) pixel combinations, where NumComponents denotes the dimension of the
LUT, which is specified in create_class_svm. By choosing ’class_selection’ = ’best’, the class that
appears most often in the cluster is stored in the LUT. For ’class_selection’ = ’fast’, only one pixel of the
cluster, i.e., the pixel with the smallest value (component-wise), is classified. The returned class is stored in
the LUT. In this case, the accuracy of the subsequent classification could become lower. On the other hand,
the runtime needed to create the LUT can be reduced; it is proportional to the maximal storage needed for
the LUT, which is 2^(NumComponents·’bit_depth’+2) bytes.
List of values: ’fast’, ’best’
Default: ’fast’
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name-array ; string
Names of the generic parameters that can be adjusted for the LUT classifier creation.
Default: []
Suggested values: GenParamName ∈ {’bit_depth’, ’class_selection’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value-array ; string / integer
Values of the generic parameters that can be adjusted for the LUT classifier creation.
Default: []
Suggested values: GenParamValue ∈ {8, 7, 6, ’fast’, ’best’}
. ClassLUTHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_lut ; handle
Handle of the LUT classifier.
Result
If the parameters are valid, the operator create_class_lut_svm returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information
7.4 Misc
clear_class_train_data ( : : ClassTrainDataHandle : )
Parameters
. NumDim (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Number of dimensions of the feature vector.
Default: 10
. ClassTrainDataHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of the training data.
Example
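A minimal sketch of the typical use; how the samples are filled in depends on the application and is only indicated
by a comment:

* Container for 3-dimensional feature vectors.
create_class_train_data (3, ClassTrainDataHandle)
* ... add labeled feature vectors to ClassTrainDataHandle ...
* Store the collected samples for later training runs.
write_class_train_data (ClassTrainDataHandle, 'samples.ctd')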
Result
If the parameters are valid, the operator create_class_train_data returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
Possible Successors
add_sample_class_knn, train_class_knn
Alternatives
create_class_svm, create_class_mlp
See also
select_feature_set_knn, read_class_knn
Module
Foundation
deserialize_class_train_data (
: : SerializedItemHandle : ClassTrainDataHandle )
get_sample_class_train_data ( : : ClassTrainDataHandle,
IndexSample : Features, ClassID )
and ClassID. Features is a feature vector of length NumDim (see create_class_train_data) and
ClassID is the class of the feature vector.
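For example, all stored samples can be inspected in a loop (a minimal sketch using only operators described in this
chapter):

get_sample_num_class_train_data (ClassTrainDataHandle, NumSamples)
for I := 0 to NumSamples - 1 by 1
    get_sample_class_train_data (ClassTrainDataHandle, I, Features, ClassID)
    * Features has length NumDim; ClassID is the associated class.
endfor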
Parameters
. ClassTrainDataHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of training data for a classifier.
. IndexSample (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of the stored training sample.
. Features (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector of the training sample.
. ClassID (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Class of the training sample.
Result
If the parameters are valid, the operator get_sample_class_train_data returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
get_sample_num_class_train_data (
: : ClassTrainDataHandle : NumSamples )
See also
create_class_train_data
Module
Foundation
select_sub_feature_class_train_data ( : : ClassTrainDataHandle,
SubFeatureIndices : SelectedClassTrainDataHandle )
Select certain features from training data to create training data containing less features.
select_sub_feature_class_train_data selects certain features from the training data
in ClassTrainDataHandle and returns the subset in SelectedClassTrainDataHandle.
The features that should be selected can be chosen by SubFeatureIndices. If
set_feature_lengths_class_train_data was not called before, the indices refer to the columns.
If set_feature_lengths_class_train_data was called before, the grouping defined there is
relevant for the meaning of the indices. The entry n in the list then selects the n-th feature group. If
set_feature_lengths_class_train_data was called with names for the feature groups, those names
can be used instead of the indices.
Parameters
. ClassTrainDataHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of the training data.
. SubFeatureIndices (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer / string
Indices or names to select the subfeatures or columns.
. SelectedClassTrainDataHandle (output_control) . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of the reduced training data.
Example
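A minimal sketch without a preceding call to set_feature_lengths_class_train_data, so the (assumed
0-based) indices address single columns:

* Keep only the first and the third feature column of the training data.
select_sub_feature_class_train_data (ClassTrainDataHandle, [0,2], SelectedClassTrainDataHandle)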
Result
If the parameters are valid, the operator select_sub_feature_class_train_data returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
serialize_class_train_data (
: : ClassTrainDataHandle : SerializedItemHandle )
Possible Successors
deserialize_class_train_data
See also
create_class_train_data, read_class_train_data
Module
Foundation
set_feature_lengths_class_train_data ( : : ClassTrainDataHandle,
SubFeatureLength, Names : )
Example
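A minimal sketch of the grouping described above; the group lengths and names are illustrative assumptions:

* The feature vectors consist of 2 color features followed by 3 shape features.
set_feature_lengths_class_train_data (ClassTrainDataHandle, [2,3], ['color','shape'])
* Whole groups can then be selected by name.
select_sub_feature_class_train_data (ClassTrainDataHandle, 'shape', SelectedClassTrainDataHandle)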
Result
If the parameters are valid, the operator set_feature_lengths_class_train_data returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
write_class_train_data writes the training data for classifiers ClassTrainDataHandle to the file
given by FileName. The classifier can be read again with read_class_train_data. The default HALCON
file extension for the training data is ’ctd’.
Parameters
. ClassTrainDataHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of the training data.
. FileName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . filename.write ; string
Name of the file in which the training data will be written.
File extension: .ctd
Result
write_class_train_data returns 2 (H_MSG_TRUE). An exception is raised if it was not possible to open
file FileName.
Execution Information
add_class_train_data_mlp ( : : MLPHandle,
ClassTrainDataHandle : )
Possible Successors
get_sample_class_mlp
Alternatives
add_sample_class_mlp
See also
create_class_mlp
Module
Foundation
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_class_mlp
Possible Successors
train_class_mlp, write_samples_class_mlp
Alternatives
read_samples_class_mlp
See also
clear_samples_class_mlp, get_sample_num_class_mlp, get_sample_class_mlp
Module
Foundation
Possible Predecessors
train_class_mlp, read_class_mlp
Alternatives
apply_dl_classifier, evaluate_class_mlp
See also
create_class_mlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation
clear_class_mlp ( : : MLPHandle : )
clear_samples_class_mlp ( : : MLPHandle : )
this case, the memory required for the training samples can be freed with clear_samples_class_mlp,
and hence memory can be saved. In the normal usage, in which the MLP is trained offline and written to
a file with write_class_mlp, it is typically unnecessary to call clear_samples_class_mlp because
write_class_mlp does not save the training samples, and hence the online process, which reads the MLP
with read_class_mlp, requires no memory for the training samples.
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp(-array) ; handle
MLP handle.
Result
If the parameters are valid, the operator clear_samples_class_mlp returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information
a_j^{(1)} = \sum_{i=1}^{n_i} w_{ji}^{(1)} x_i + b_j^{(1)}, \quad j = 1, \ldots, n_h

z_j = \tanh(a_j^{(1)}), \quad j = 1, \ldots, n_h

Here, the matrix w_{ji}^{(1)} and the vector b_j^{(1)} are the weights of the input layer (first layer) of the MLP. In the hidden
layer (second layer), the activations z_j are transformed in a first step by using linear combinations of the variables
in an analogous manner as above:

a_k^{(2)} = \sum_{j=1}^{n_h} w_{kj}^{(2)} z_j + b_k^{(2)}, \quad k = 1, \ldots, n_o

Here, the matrix w_{kj}^{(2)} and the vector b_k^{(2)} are the weights of the second layer of the MLP.
The activation function used in the output layer can be determined by setting OutputFunction. For
OutputFunction = ’linear’, the data are simply copied:
y_k = a_k^{(2)}, \quad k = 1, \ldots, n_o
This type of activation function should be used for regression problems (function approximation). This activation
function is not suited for classification problems.
For OutputFunction = ’logistic’, the activations are computed as follows:
y_k = \frac{1}{1 + \exp(-a_k^{(2)})}, \quad k = 1, \ldots, n_o
This type of activation function should be used for classification problems with multiple (NumOutput) indepen-
dent logical attributes as output. This kind of classification problem is relatively rare in practice.
For OutputFunction = ’softmax’, the activations are computed as follows:
y_k = \frac{\exp(a_k^{(2)})}{\sum_{l=1}^{n_o} \exp(a_l^{(2)})}, \quad k = 1, \ldots, n_o
This type of activation function should be used for common classification problems with multiple (NumOutput)
mutually exclusive classes as output. In particular, OutputFunction = ’softmax’ must be used for the classifi-
cation of pixel data with classify_image_class_mlp.
The parameters Preprocessing and NumComponents can be used to specify a preprocessing of the feature
vectors. For Preprocessing = ’none’, the feature vectors are passed unaltered to the MLP. NumComponents
is ignored in this case.
For all other values of Preprocessing, the training data set is used to compute a transformation of the feature
vectors during the training as well as later in the classification or evaluation.
For Preprocessing = ’normalization’, the feature vectors are normalized by subtracting the mean of the
training vectors and dividing the result by the standard deviation of the individual components of the training
vectors. Hence, the transformed feature vectors have a mean of 0 and a standard deviation of 1. The normalization
does not change the length of the feature vector. NumComponents is ignored in this case. This transformation can
be used if the mean and standard deviation of the feature vectors differ substantially from 0 and 1, respectively,
or for data in which the components of the feature vectors are measured in different units (e.g., if some of the
data are gray value features and some are region features, or if region features are mixed, e.g., ’circularity’
(unit: scalar) and ’area’ (unit: pixel squared)). In these cases, the training of the net will typically require fewer
iterations than without normalization.
For Preprocessing = ’principal_components’, a principal component analysis is performed. First, the feature
vectors are normalized (see above). Then, an orthogonal transformation (a rotation in the feature space) that
decorrelates the training vectors is computed. After the transformation, the mean of the training vectors is 0 and
the covariance matrix of the training vectors is a diagonal matrix. The transformation is chosen such that the
transformed features that contain the most variation are contained in the first components of the transformed feature
vector. With this, it is possible to omit the transformed features in the last components of the feature vector,
which typically are mainly influenced by noise, without losing a large amount of information. The parameter
NumComponents can be used to determine how many of the transformed feature vector components should be
used. Up to NumInput components can be selected. The operator get_prep_info_class_mlp can be
used to determine how much information each transformed component contains. Hence, it aids the selection of
NumComponents. Like data normalization, this transformation can be used if the mean and standard deviation of
the feature vectors differ substantially from 0 and 1, respectively, or for feature vectors in which the components
of the data are measured in different units. In addition, this transformation is useful if it can be expected that the
features are highly correlated.
In contrast to the above three transformations, which can be used for all MLP types, the transformation spec-
ified by Preprocessing = ’canonical_variates’ can only be used if the MLP is used as a classifier (i.e., with
OutputFunction = ’softmax’). The computation of the canonical variates is also called linear discrimi-
nant analysis. In this case, a transformation that first normalizes the training vectors and then decorrelates the
training vectors on average over all classes is computed. At the same time, the transformation maximally sepa-
rates the mean values of the individual classes. As for Preprocessing = ’principal_components’, the trans-
formed components are sorted by information content, and hence transformed components with little informa-
tion content can be omitted. For canonical variates, up to min(NumOutput − 1, NumInput) components can
be selected. Also in this case, the information content of the transformed components can be determined with
get_prep_info_class_mlp. Like principal component analysis, canonical variates can be used to reduce
the amount of data without losing a large amount of information, while additionally optimizing the separability of
the classes after the data reduction.
For the last two types of transformations (’principal_components’ and ’canonical_variates’), the actual number of
input units of the MLP is determined by NumComponents, whereas NumInput determines the dimensionality
of the input data (i.e., the length of the untransformed feature vector). Hence, by using one of these two transfor-
mations, the number of input variables, and thus usually also the number of hidden units can be reduced. With this,
the time needed to train the MLP and to evaluate and classify a feature vector is typically reduced.
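A minimal sketch of how NumComponents can be chosen with the help of get_prep_info_class_mlp; the
concrete dimensions and the 95% criterion are illustrative assumptions, and training samples must have been added
before the call:

* Create the MLP with the full input dimension first.
create_class_mlp (15, 10, 4, 'softmax', 'principal_components', 15, 42, MLPHandle)
* ... add training samples with add_sample_class_mlp ...
get_prep_info_class_mlp (MLPHandle, 'principal_components', InformationCont, CumInformationCont)
* Pick the smallest number of components whose cumulative information
* content exceeds, e.g., 0.95, and recreate the MLP with that NumComponents.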
Usually, NumHidden should be selected in the order of magnitude of NumInput and NumOutput. In many
cases, much smaller values of NumHidden already lead to very good classification results. If NumHidden is
chosen too large, the MLP may overfit the training data, which typically leads to bad generalization properties, i.e.,
the MLP learns the training data very well, but does not return very good results on unknown data.
create_class_mlp initializes the above described weights with random numbers. To ensure that the results of
training the classifier with train_class_mlp are reproducible, the seed value of the random number generator
is passed in RandSeed. If the training results in a relatively large error, it sometimes may be possible to achieve
a smaller error by selecting a different value for RandSeed and retraining an MLP.
After the MLP has been created, typically training samples are added to the MLP by repeatedly calling
add_sample_class_mlp or read_samples_class_mlp. After this, the MLP is typically trained us-
ing train_class_mlp. Hereafter, the MLP can be saved using write_class_mlp. Alternatively, the MLP
can be used immediately after training to evaluate data using evaluate_class_mlp or, if the MLP is used as
a classifier (i.e., for OutputFunction = ’softmax’), to classify data using classify_class_mlp.
The training of the MLP will usually result in very sharp boundaries between the different classes, i.e., the confi-
dence for one class will drop from close to 1 (within the region of the class) to close to 0 (within the region of a
different class) within a very narrow “band” in the feature space. If the classes do not overlap, this transition hap-
pens at a suitable location between the classes; if the classes overlap, the transition happens at a suitable location
within the overlapping area. While this sharp transition is desirable in many applications, in some applications
a smoother transition between different classes (i.e., a transition within a wider “band” in the feature space) is
desirable to reflect a level of uncertainty within the region in the feature space between the classes. Furthermore,
as described above, it may be desirable to prevent overfitting of the MLP to the training data. For these purposes,
the MLP can be regularized by using set_regularization_params_class_mlp.
An MLP, as defined above, has no inherent capability for novelty detection, i.e., it will classify a random fea-
ture vector into one of the classes with a confidence close to 1 (unless the random feature vector happens to
lie in a region of the feature space in which the training samples of different classes overlap). In some appli-
cations, however, it is desirable to reject feature vectors that do not lie close to any class, where “closeness” is
defined by the proximity of the feature vector to the collection of feature vectors in the training set. To pro-
vide an MLP with the ability for novelty detection, i.e., to reject feature vectors that do not belong to any class,
an explicit rejection class can be created by setting NumOutput to the number of actual classes plus 1. Then,
set_rejection_params_class_mlp can be used to configure train_class_mlp to automatically gen-
erate samples for this rejection class.
The combination of regularization and an automatic generation of a rejection class is useful in many applications
since it provides a smooth transition between the actual classes and from the actual classes to the rejection class.
This reflects the requirement of these applications that only feature vectors within the area of the feature space
that corresponds to the training samples of each class should have a confidence close to 1, whereas random feature
vectors not belonging to any class should have a confidence close to 0, and that transitions between the classes
should be smooth, reflecting a growing degree of uncertainty the farther a feature vector lies from the respective
class. In particular, OCR applications sometimes have this requirement (see create_ocr_class_mlp).
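A minimal sketch of this combination; the parameter values and the sampling strategy are illustrative assumptions
(see set_regularization_params_class_mlp and set_rejection_params_class_mlp for the
available options):

* 3 actual classes plus one explicit rejection class.
create_class_mlp (5, 20, 4, 'softmax', 'normalization', 5, 42, MLPHandle)
* Smooth the transitions between the classes ...
set_regularization_params_class_mlp (MLPHandle, 'weight_prior', 1.0)
* ... and let the training generate samples for the rejection class.
set_rejection_params_class_mlp (MLPHandle, 'sampling_strategy', 'hyperbox_around_all_classes')
* ... add training samples for the 3 actual classes, then train:
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)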
A comparison of the MLP and the support vector machine (SVM) (see create_class_svm) typically shows
that SVMs are generally faster at training, especially for huge training sets, and achieve slightly better recognition
rates than MLPs. The MLP is faster at classification and should therefore be preferred in time critical applications.
Please note that this guideline assumes optimal tuning of the parameters.
Parameters
. NumInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of input variables (features) of the MLP.
Default: 20
Suggested values: NumInput ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction: NumInput >= 1
. NumHidden (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of hidden units of the MLP.
Default: 10
Suggested values: NumHidden ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 120, 150}
Restriction: NumHidden >= 1
. NumOutput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of output variables (classes) of the MLP.
Default: 5
Suggested values: NumOutput ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 120, 150}
Restriction: NumOutput >= 1
. OutputFunction (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the activation function in the output layer of the MLP.
Default: ’softmax’
List of values: OutputFunction ∈ {’linear’, ’logistic’, ’softmax’}
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of preprocessing used to transform the feature vectors.
Default: ’normalization’
List of values: Preprocessing ∈ {’none’, ’normalization’, ’principal_components’, ’canonical_variates’}
. NumComponents (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = ’none’ and
Preprocessing = ’normalization’).
Default: 10
Suggested values: NumComponents ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction: NumComponents >= 1
. RandSeed (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Seed value of the random number generator that is used to initialize the MLP with random values.
Default: 42
. MLPHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
Example
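A minimal sketch of the typical workflow; the feature dimensions, class count, and sample values are illustrative
assumptions:

* 2 input features, 5 hidden units, 3 mutually exclusive classes.
create_class_mlp (2, 5, 3, 'softmax', 'normalization', 2, 42, MLPHandle)
* Add training samples; the target vector marks the correct class (here class 1).
add_sample_class_mlp (MLPHandle, [0.25,0.75], [0,1,0])
* ... add further samples ...
train_class_mlp (MLPHandle, 200, 1, 0.01, Error, ErrorLog)
* Classify an unknown feature vector.
classify_class_mlp (MLPHandle, [0.3,0.7], 1, Class, Confidence)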
Result
If the parameters are valid, the operator create_class_mlp returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Execution Information
Parameters
. SerializedItemHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
. MLPHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
Result
If the parameters are valid, the operator deserialize_class_mlp returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
get_params_class_mlp returns the parameters of a multilayer perceptron (MLP) that were specified when
the MLP was created with create_class_mlp. This is particularly useful if the MLP was read from a file with
read_class_mlp. The output of get_params_class_mlp can, for example, be used to check whether the
feature vectors and, if necessary, the target data to be used with the MLP have the correct lengths. For a description
of the parameters, see create_class_mlp.
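For example, after reading an MLP from file, the expected feature vector length can be checked against the features
actually computed (a minimal sketch; the file name is an assumption):

read_class_mlp ('classifier.mlp', MLPHandle)
get_params_class_mlp (MLPHandle, NumInput, NumHidden, NumOutput, \
                      OutputFunction, Preprocessing, NumComponents)
* Feature vectors passed to classify_class_mlp must have length NumInput.
if (|Features| != NumInput)
    * Handle the mismatch, e.g., recompute the features.
endif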
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. NumInput (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of input variables (features) of the MLP.
. NumHidden (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of hidden units of the MLP.
. NumOutput (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of output variables (classes) of the MLP.
. OutputFunction (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of the activation function in the output layer of the MLP.
. Preprocessing (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of preprocessing used to transform the feature vectors.
. NumComponents (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Preprocessing parameter: Number of transformed features.
Result
If the parameters are valid, the operator get_params_class_mlp returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
get_prep_info_class_mlp ( : : MLPHandle,
Preprocessing : InformationCont, CumInformationCont )
Compute the information content of the preprocessed feature vectors of a multilayer perceptron.
get_prep_info_class_mlp computes the information content of the training vectors that have been
transformed with the preprocessing given by Preprocessing. Preprocessing can be set to ’princi-
pal_components’ or ’canonical_variates’. The preprocessing methods are described with create_class_mlp.
The information content is derived from the variations of the transformed components of the feature vector, i.e.,
it is computed solely based on the training data, independent of any error rate on the training data. The informa-
tion content is computed for all relevant components of the transformed feature vectors (NumInput for ’princi-
pal_components’ and min(NumOutput − 1, NumInput) for ’canonical_variates’, see create_class_mlp),
and is returned in InformationCont as a number between 0 and 1. To convert the information content into
a percentage, it simply needs to be multiplied by 100. The cumulative information content of the first n compo-
nents is returned in the n-th component of CumInformationCont, i.e., CumInformationCont contains the
cumulative sums of the values in InformationCont.
Result
If the parameters are valid, the operator get_prep_info_class_mlp returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
get_prep_info_class_mlp may return the error 9211 (Matrix is not positive definite) if Preprocessing
= ’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Execution Information
get_regularization_params_class_mlp ( : : MLPHandle,
GenParamName : GenParamValue )
get_rejection_params_class_mlp ( : : MLPHandle,
GenParamName : GenParamValue )
Possible Predecessors
create_class_mlp
Possible Successors
train_class_mlp
Module
Foundation
* Train an MLP
create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
                  'canonical_variates', NumComp, 42, MLPHandle)
read_samples_class_mlp (MLPHandle, 'samples.mtf')
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
* Reclassify the training samples
get_sample_num_class_mlp (MLPHandle, NumSamples)
for I := 0 to NumSamples-1 by 1
    get_sample_class_mlp (MLPHandle, I, Data, Target)
    classify_class_mlp (MLPHandle, Data, 1, Class, Confidence)
    Result := gen_tuple_const(NumOut,0)
    Result[Class] := 1
    Diffs := Target-Result
    if (sum(fabs(Diffs)) > 0)
        * Sample has been classified incorrectly
    endif
endfor
Result
If the parameters are valid, the operator get_sample_class_mlp returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
Return the number of training samples stored in the training data of a multilayer perceptron.
get_sample_num_class_mlp returns in NumSamples the number of training samples that are stored in
the multilayer perceptron (MLP) given by MLPHandle. get_sample_num_class_mlp should be called
before the individual training samples are accessed with get_sample_class_mlp, e.g., for the purpose of
reclassifying the training data (see get_sample_class_mlp).
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. NumSamples (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of stored training samples.
Result
If MLPHandle is valid, the operator get_sample_num_class_mlp returns the value 2 (H_MSG_TRUE). If
necessary an exception is raised.
Execution Information
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Successors
classify_class_mlp, evaluate_class_mlp, create_class_lut_mlp
Alternatives
read_dl_classifier
See also
create_class_mlp, write_class_mlp
Module
Foundation
Result
If the parameters are valid, the operator select_feature_set_mlp returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
set_regularization_params_class_mlp ( : : MLPHandle,
GenParamName, GenParamValue : )
’num_outer_iterations’: This parameter determines whether the regularization parameters should be determined
automatically (GenParamValue >= 1) or manually (GenParamValue = 0, default), as described be-
low in the sections “Technical Background” and “Automatic Determination of the Regularization Parame-
ters”. As described in detail in the section “Automatic Determination of the Regularization Parameters”,
’num_outer_iterations’ should not be set too large (in the range of 1 to 5) to enable manual checking of the
convergence of the automatic determination of the regularization parameters.
’num_inner_iterations’: This parameter potentially enables somewhat faster convergence of the automatic deter-
mination of the regularization parameters, as described below in the section “Automatic Determination of the
Regularization Parameters”. It should typically be left at its default value of 1.
’weight_prior’: On the one hand, this selects the regularization model to be used, as described below in the section
“Technical Background”. On the other hand, if manual determination of the regularization parameters has
been selected (i.e., ’num_outer_iterations’ = 0), the regularization parameters are set with GenParamName,
whereas the initial values of the regularization parameters are set if automatic determination of the regular-
ization parameters has been selected (i.e., ’num_outer_iterations’ >= 1), as described below in the section
“Automatic Determination of the Regularization Parameters”. Manual determination of the regularization
parameters (see the section “Regularization Parameters” below) is only realistic if a single regularization
parameter is used. In all other cases, the regularization parameters should be determined automatically.
’noise_prior’: This parameter allows specifying a noise prior for MLPs that have been configured for regression, as described
below in the section “Application Areas”. If manual determination of the regularization parameters has been
selected, the noise prior is set with GenParamName, whereas the initial value of the noise prior is set if
automatic determination of the regularization parameters has been selected. Typically, it is only useful to use
this parameter if the regularization parameters are determined automatically.
Please note that the automatic determination of the regularization parameters requires a very large amount of
memory and runtime, as described in detail in the section “Complexity” below. Therefore, NumHidden should
not be selected too large when the MLP is created with create_class_mlp. For example, normal OCR
applications seldom require NumHidden to be larger than 30-60.
Application Areas
As described at create_class_mlp, it may be desirable to regularize the MLP to enforce a smoother transition
of the confidences between the different classes and to prevent overfitting of the MLP to the training data. To
achieve this, a penalty for large MLP weights (which are the main reason for very sharp transitions between classes)
can be added to the training of the MLP in train_class_mlp by setting GenParamName to ’weight_prior’
and setting GenParamValue to a value > 0.
If the MLP has been configured for regression (i.e., if OutputFunction was set to ’linear’ in
create_class_mlp), an inverse variance of the expected noise in the data can be specified by setting
GenParamName to ’noise_prior’ and setting GenParamValue to a value > 0. Setting the noise prior only
has an effect if a weight prior has been specified. In this case, it can be used to weight the data error term (the
output error of the MLP) against the weight error term.
As described in more detail below, the regularization parameters of the MLP may be determined automatically (at
the expense of significantly increased training times) by setting GenParamName to ’num_outer_iterations’ and
setting GenParamValue to a value > 0.
Technical Background
There are three different kinds of penalty terms that can be set with ’weight_prior’. Note that in the following the
parameters w_{ji}^{(l)} and b_k^{(l)} refer to the weights of the different layers of the MLP, as described in
create_class_mlp.
If a single value α is specified, all MLP weights are penalized equally by adding the following term to the opti-
mization in train_class_mlp:

E_W = \frac{\alpha}{2} \left( \sum_{i=1}^{n_i} \sum_{j=1}^{n_h} (w_{ji}^{(1)})^2 + \sum_{j=1}^{n_h} (b_j^{(1)})^2 + \sum_{j=1}^{n_h} \sum_{k=1}^{n_o} (w_{kj}^{(2)})^2 + \sum_{k=1}^{n_o} (b_k^{(2)})^2 \right)

Alternatively, four values [α_{w1}, α_{b1}, α_{w2}, α_{b2}] can be specified. These four parameters enable the individual regu-
larization of the four groups of weights:

E_W = \frac{\alpha_{w1}}{2} \sum_{i=1}^{n_i} \sum_{j=1}^{n_h} (w_{ji}^{(1)})^2 + \frac{\alpha_{b1}}{2} \sum_{j=1}^{n_h} (b_j^{(1)})^2 + \frac{\alpha_{w2}}{2} \sum_{j=1}^{n_h} \sum_{k=1}^{n_o} (w_{kj}^{(2)})^2 + \frac{\alpha_{b2}}{2} \sum_{k=1}^{n_o} (b_k^{(2)})^2

Finally, n_i + 3 values [α_1, ..., α_{n_i}, α_{b1}, α_{w2}, α_{b2}] can be specified. These n_i + 3 parameters enable the individual
regularization of each input variable x_1, ..., x_{n_i} and the regularization of the remaining three groups of weights:

E_W = \sum_{i=1}^{n_i} \frac{\alpha_i}{2} \sum_{j=1}^{n_h} (w_{ji}^{(1)})^2 + \frac{\alpha_{b1}}{2} \sum_{j=1}^{n_h} (b_j^{(1)})^2 + \frac{\alpha_{w2}}{2} \sum_{j=1}^{n_h} \sum_{k=1}^{n_o} (w_{kj}^{(2)})^2 + \frac{\alpha_{b2}}{2} \sum_{k=1}^{n_o} (b_k^{(2)})^2
This kind of regularization is only useful in conjunction with the automatic determination of the regularization
parameters described below. If the automatic determination of the regularization parameters returns a very large
value of α_i (compared to the smallest of the n_i values α_i), the corresponding input variable has little rele-
vance for the MLP output. If this is the case, it should be tested whether the input variable can be omitted from the
input of the MLP without negatively affecting the MLP’s performance. The advantage of omitting irrelevant input
variables is an increased speed of the MLP for classification.
The parameters α can be regarded as the inverse variance of a Gaussian prior distribution on the MLP weights, i.e.,
they express an expectation about the size of the MLP weights. The larger the α are chosen, the smaller the MLP
weights will be.
Regularization Parameters
The larger the regularization parameter(s) ’weight_prior’ are chosen, the smoother the transition of the confidences
between the different classes will be. The required values for the regularization parameter(s) depend on the MLP,
especially the number of hidden units, the training data, and the scale of the training data (if no normalization
is used). Typically, a higher value for the regularization parameter(s) is necessary if the MLP has more hidden
units and if the training data consists of more points. For typical applications, the regularization parameters are
determined by verifying the MLP performance on a test data set that is independent from the training data set. If
an independent test data set is unavailable, cross validation can be used. Cross validation works by splitting the
data set into separate parts (for example, 80% of the data set for training and 20% for testing), training the MLP
with the training data set (the 80% of the data in the above example), and testing the MLP performance on the
test set (the 20% of the data in the above example). The procedure can be repeated for the other possible splits
of the data (in the 80%–20% example, there are five possible splits). This procedure can, for example, start with
relatively large values of the weight regularization parameters (which will typically result in misclassifications on
the test data set). The weight regularization parameters can then be decreased until an acceptable performance on
the test data sets is reached.
Automatic Determination of the Regularization Parameters
The regularization parameters, i.e., the weight priors and the noise prior, can also be determined automati-
cally by train_class_mlp using the so-called evidence procedure (for details about the evidence procedure,
please refer to the articles in the section “References” below). This training mode can be selected by setting
GenParamName to ’num_outer_iterations’ and setting GenParamValue to a value > 0. Note that this typically
results in training times that are one to three orders of magnitude larger than simply training the MLP with fixed
regularization parameters.
The evidence procedure is an iterative algorithm that performs the following two steps for a number of outer itera-
tions: first, the network is trained using the current values of the regularization parameters; next, the regularization
parameters are re-estimated using the weights of the optimized MLP. In the first iteration, the weight priors and
noise priors specified with ’weight_prior’ and ’noise_prior’ are used. Thus, for the automatic determination of the
regularization parameters, the values specified by the user serve as the starting parameters for the evidence proce-
dure. The starting parameters for the weight priors should not be set too large because this might over-regularize
the training and may result in badly determined regularization parameters. The initial values for the weight priors
should typically be in the range 0.01-0.1.
The number of outer iterations can be set by setting GenParamName to ’num_outer_iterations’ and setting
GenParamValue to a value > 0. If GenParamValue is set to 0 (this is the default value), the evidence proce-
dure is not executed and the MLP is simply trained using the user-specified regularization parameters.
The number of outer iterations should be set high enough to ensure the convergence of the regularization parame-
ters. In contrast to the training of the MLP’s weights, a numerical convergence criterion is typically very difficult
to specify and some human judgment is typically required to decide whether the regularization parameters have
converged sufficiently. Therefore, it might not be possible to set the number of outer iterations a-priori to ensure
convergence of the regularization parameters. In these cases, the outer loop over the steps of the evidence pro-
cedure can be implemented manually by setting ’num_outer_iterations’ to 1 and calling train_class_mlp
repeatedly. This has the advantage that the weight priors and noise prior can be queried after each iteration and can
be checked manually for convergence. In this approach, the performance of the MLP can even be checked after
each iteration on an independent test set to check the generalization performance of the classifier.
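A minimal sketch of this manual outer loop; the number of iterations and the initial weight prior are illustrative
assumptions:

* One outer iteration of the evidence procedure per training call.
set_regularization_params_class_mlp (MLPHandle, 'weight_prior', 0.01)
set_regularization_params_class_mlp (MLPHandle, 'num_outer_iterations', 1)
for I := 1 to 5 by 1
    train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
    get_regularization_params_class_mlp (MLPHandle, 'weight_prior', WeightPrior)
    * Inspect WeightPrior (and, e.g., the error on a separate test set) and
    * stop when the values no longer change significantly.
endfor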
If the number of outer iterations has been determined (approximately) for a class of applications, it may be possible
to reduce the run time of the training (if MLPs should be trained in the future with similar data sets) by setting
GenParamName to ’num_inner_iterations’ and setting GenParamValue to a value > 1 (the default value is 1)
and by reducing the number of outer iterations. The number of outer iterations can typically not be reduced by the
same factor by which the number of inner iterations is increased. Using this approach, the run time of the training
can be optimized. However, this approach is only useful if many MLPs are trained with similar data sets. If this is
not the case, ’num_inner_iterations’ should be left at its default value of 1.
The automatically determined weight priors and noise prior can be queried after the training us-
ing get_regularization_params_class_mlp by setting GenParamName to ’weight_prior’ or
’noise_prior’, respectively.
In addition to the weight prior and noise prior, the evidence procedure determines an estimate of the
number of parameters of the MLP that can be determined well using the training data. This re-
sult can be queried using get_regularization_params_class_mlp by setting GenParamName to
’num_well_determined_params’. Alternatively, the fraction of well-determined parameters can be queried by
setting GenParamName to ’fraction_well_determined_params’. If the number of well-determined parameters is
significantly smaller than nw (where nw is the number of weights in the MLP, as described in the section “Com-
plexity” below) or the fraction of well-determined parameters is significantly smaller than 1, consider reducing the
number of hidden units or, if the number of hidden units cannot be decreased without increasing the error rate of
the MLP significantly, consider performing a preprocessing that reduces the number of input variables to the net,
i.e., canonical variates or principal components.
Please note that the number of well-determined parameters can only be determined after the weight priors and
noise prior have been determined. This is the reason why the evidence procedure ends with the determination of
the regularization parameters and not with the training of the MLP weights. Hence, after the evidence procedure
the MLP will not have been trained with the latest regularization parameters. This should make no difference if
they have converged. If you want the training to end with an optimization of the weights using the latest values of
the regularization parameters, you can set ’num_outer_iterations’ to 0 and can call train_class_mlp again.
If you do so, please note, however, that the number of well-determined parameters may change and, therefore, the
value returned by get_regularization_params_class_mlp is technically inconsistent.
Saved Parameters
Note that the parameters ’num_outer_iterations’ and ’num_inner_iterations’ only affect the training of
the MLP. Therefore, they are not saved when the MLP is stored using write_class_mlp or
serialize_class_mlp. Thus, they must be set anew if the MLP is loaded again using read_class_mlp
or deserialize_class_mlp and if training using the automatic determination of the regularization
parameters should be continued. All other parameters described above (’weight_prior’, ’noise_prior’,
’num_well_determined_params’, and ’fraction_well_determined_params’) are saved.
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the regularization parameter to set.
Default: ’weight_prior’
List of values: GenParamName ∈ {’weight_prior’, ’noise_prior’, ’num_outer_iterations’,
’num_inner_iterations’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Value of the regularization parameter.
Default: 1.0
Suggested values: GenParamValue ∈ {0.01, 0.1, 1.0, 10.0, 100.0, 0, 1, 2, 3, 5, 10, 15, 20}
Example
endfor
* Set up the automatic determination of the regularization
* parameters.
set_regularization_params_class_mlp (MLPHandle, 'weight_prior', \
[0.01,0.01,0.01,0.01])
set_regularization_params_class_mlp (MLPHandle, \
'num_outer_iterations', 10)
* Train the MLP.
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
* Read out the estimate of the number of well-determined
* parameters.
get_regularization_params_class_mlp (MLPHandle, \
'fraction_well_determined_params', \
FractionParams)
* If FractionParams differs substantially from 1, consider reducing
* NumHidden appropriately and consider performing a preprocessing that
* reduces the number of input variables to the net, i.e., canonical
* variates or principal components.
write_class_mlp (MLPHandle, 'classifier.mlp')
Complexity
Let ni denote the number of input units of the MLP (i.e., ni = NumInput or ni = NumComponents, depending
on the value of Preprocessing, as described at create_class_mlp), nh the number of hidden units, and no
the number of output units. Then, the number of weights of the MLP is nw = (ni + 1) nh + (nh + 1) no.
Let nd denote the number of training samples. Let nM denote the number of iterations set with MaxIterations
in train_class_mlp. Let nO and nI denote the number of outer and inner iterations, respectively.
The run time of the training without regularization or with regularization with fixed regularization parameters is of
complexity O(nM nw nd). In contrast, the run time of the training with automatic determination of the regularization
parameters is of complexity
O(nO nM nw nd) + O(nO nw² nd) + O(nO nw³) + O(nO nI nw³).
The training without regularization or with regularization with fixed regularization parameters requires at least
48 nw + 24 nh nd + 16 no nd bytes of memory. The training with automatic determination of the regularization
parameters requires at least 24 nw² + 48 nw + 72 nh nd + 56 no nd bytes of memory. Under special circumstances,
another 24 nw² + 8 nw bytes of memory are required.
Result
If the parameters are valid, the operator set_regularization_params_class_mlp returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
• MLPHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_class_mlp
Possible Successors
get_regularization_params_class_mlp, train_class_mlp
References
David J. C. MacKay: “Bayesian Interpolation”; Neural Computation 4(3):415-447; 1992.
David J. C. MacKay: “A Practical Bayesian Framework for Backpropagation Networks”; Neural Computation
4(3):448-472; 1992.
David J. C. MacKay: “The Evidence Framework Applied to Classification Networks”; Neural Computation 4(5):
720-736; 1992.
David J. C. MacKay: “Comparison of Approximate Methods for Handling Hyperparameters”; Neural Computation
11(5):1035-1068; 1999.
Module
Foundation
’rejection_class_index’: By default, the last class serves as the rejection class. If another class should be used,
GenParamName must be set to ’rejection_class_index’ and GenParamValue to the class index.
’sampling_strategy’: Currently, three strategies exist to generate samples for the rejection class during the
training of the MLP. These strategies can be selected by setting GenParamName to ’sampling_strategy’
and GenParamValue to ’hyperbox_around_all_classes’, ’hyperbox_around_each_class’, or
’hyperbox_ring_around_each_class’. The sampling strategy ’hyperbox_around_all_classes’ takes the
bounding box of all training samples that have been provided so far. The sampling strategy
’hyperbox_around_each_class’ is similar, with the only difference that the bounding box around each class
is taken as the area where the rejection samples are generated. The sampling strategy
’hyperbox_ring_around_each_class’ generates samples only in the enlarged areas around the bounding box of each
class, thus generating a hyperbox ring around the original samples. Please note that with increasing dimen-
sionality the sampling strategies ’hyperbox_around_each_class’ and ’hyperbox_ring_around_each_class’
provide the same result. If no rejection class sampling strategy should be used, which is the default,
GenParamValue must be set to ’no_rejection_class’.
’hyperbox_tolerance’: The factor ’hyperbox_tolerance’ describes by what amount the bounding box should be
enlarged in all dimensions. Then, inside this box samples are randomly generated from a uniform distribution.
The default value is 0.2.
’rejection_sample_factor’: The number of rejection samples is the number of provided samples multiplied by
’rejection_sample_factor’. If not enough samples are generated, the rejection class may not be classified
correctly. If the rejection class has too many samples, the normal classes are classified as rejection class. The
default value is 1.0. Note that the training time will increase by a factor of 1 + f , where f is the value of
’rejection_sample_factor’.
’random_seed’: To ensure reproducible results, a random seed can be set with ’random_seed’. The default value
is 42.
Because this operator only parametrizes the training of the MLP, the values are not saved by write_class_mlp.
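The following lines sketch how these generic parameters might be set before training; the handle name and the parameter values are illustrative assumptions.
set_rejection_params_class_mlp (MLPHandle, 'sampling_strategy', \
                                'hyperbox_around_all_classes')
set_rejection_params_class_mlp (MLPHandle, 'rejection_sample_factor', 0.5)
set_rejection_params_class_mlp (MLPHandle, 'random_seed', 42)
train_class_mlp (MLPHandle, 200, 1, 0.01, Error, ErrorLog)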
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Names of the generic parameters.
Default: ’sampling_strategy’
List of values: GenParamName ∈ {’sampling_strategy’, ’hyperbox_tolerance’, ’rejection_sample_factor’,
’random_seed’, ’rejection_class_index’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string / real / integer
Values of the generic parameters.
Default: ’hyperbox_around_all_classes’
List of values: GenParamValue ∈ {’no_rejection_class’, ’hyperbox_around_all_classes’,
’hyperbox_around_each_class’, ’hyperbox_ring_around_each_class’}
Result
If the parameters are valid, the operator set_rejection_params_class_mlp returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
value depends on the number of training samples as well as the number of output variables of the MLP. Here, too,
values between 0.00001 and 1 should typically be used. The optimization is terminated if the weight change is
smaller than WeightTolerance and the change of the error value is smaller than ErrorTolerance. In any
case, the optimization is terminated after at most MaxIterations iterations. It should be noted that, depending
on the size of the MLP and the number of training samples, the training can take from a few seconds to several
hours.
On output, train_class_mlp returns the error of the MLP with the optimal weights on the training samples
in Error. Furthermore, ErrorLog contains the error value as a function of the number of iterations. With
this, it is possible to decide whether a second training of the MLP with the same training data without creating
the MLP anew makes sense. If ErrorLog is regarded as a function, it should drop off steeply initially, while
leveling out very flatly at the end. If ErrorLog is still relatively steep at the end, it usually makes sense to call
train_class_mlp again. It should be noted, however, that this mechanism should not be used to train the
MLP successively with MaxIterations = 1 (or other small values for MaxIterations) because this will
substantially increase the number of iterations required to train the MLP. Note that if an automatic determination of
the regularization parameters has been specified with set_regularization_params_class_mlp, Error
and ErrorLog refer to the last training that was executed in the evidence procedure. If the error log should be
monitored within the individual iterations of the evidence procedure, the outer iteration of the evidence procedure
must be implemented explicitly, as described at set_regularization_params_class_mlp.
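The following lines sketch the retraining decision described above; the handle, the training parameters, and the threshold on the error change are illustrative assumptions, and ErrorLog is assumed to contain at least two entries.
train_class_mlp (MLPHandle, 200, 1, 0.01, Error, ErrorLog)
* If the error log is still dropping noticeably at the end, train again.
if (ErrorLog[|ErrorLog|-2] - ErrorLog[|ErrorLog|-1] > 0.00001)
    train_class_mlp (MLPHandle, 200, 1, 0.01, Error, ErrorLog)
endif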
Parameters
. MLPHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_mlp ; handle
MLP handle.
. MaxIterations (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Maximum number of iterations of the optimization algorithm.
Default: 200
Suggested values: MaxIterations ∈ {20, 40, 60, 80, 100, 120, 140, 160, 180, 200, 220, 240, 260, 280,
300}
. WeightTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Threshold for the difference of the weights of the MLP between two iterations of the optimization algorithm.
Default: 1.0
Suggested values: WeightTolerance ∈ {1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001}
Restriction: WeightTolerance >= 1.0e-8
. ErrorTolerance (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Threshold for the difference of the mean error of the MLP on the training data between two iterations of the
optimization algorithm.
Default: 0.01
Suggested values: ErrorTolerance ∈ {1.0, 0.1, 0.01, 0.001, 0.0001, 0.00001}
Restriction: ErrorTolerance >= 1.0e-8
. Error (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Mean error of the MLP on the training data.
. ErrorLog (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Mean error of the MLP on the training data as a function of the number of iterations of the optimization
algorithm.
Example
* Train an MLP
create_class_mlp (NumIn, NumHidden, NumOut, 'softmax', \
'normalization', 1, 42, MLPHandle)
read_samples_class_mlp (MLPHandle, 'samples.mtf')
train_class_mlp (MLPHandle, 100, 1, 0.01, Error, ErrorLog)
write_class_mlp (MLPHandle, 'classifier.mlp')
Result
If the parameters are valid, the operator train_class_mlp returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
train_class_mlp may return the error 9211 (Matrix is not positive definite) if Preprocessing =
’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each class.
Execution Information
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
add_sample_class_mlp, read_samples_class_mlp,
set_regularization_params_class_mlp
Possible Successors
evaluate_class_mlp, classify_class_mlp, write_class_mlp, create_class_lut_mlp
Alternatives
train_dl_classifier_batch, read_class_mlp
See also
create_class_mlp
References
Christopher M. Bishop: “Neural Networks for Pattern Recognition”; Oxford University Press, Oxford; 1995.
Andrew Webb: “Statistical Pattern Recognition”; Arnold, London; 1999.
Module
Foundation
Possible Predecessors
train_class_mlp
Possible Successors
clear_class_mlp
See also
create_class_mlp, read_class_mlp, write_samples_class_mlp
Module
Foundation
add_class_train_data_svm ( : : SVMHandle,
ClassTrainDataHandle : )
• SVMHandle
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
create_class_svm, create_class_train_data
Possible Successors
get_sample_class_svm
Alternatives
add_sample_class_svm
See also
create_class_svm
Module
Foundation
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. Features (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector of the training sample to be stored.
. Class (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer / real
Class of the training sample to be stored.
Result
If the parameters are valid the operator add_sample_class_svm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
clear_class_svm ( : : SVMHandle : )
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
classify_class_svm
See also
create_class_svm, read_class_svm, write_class_svm, train_class_svm
Module
Foundation
clear_samples_class_svm ( : : SVMHandle : )
For a binary classification problem in which the classes are linearly separable the SVM algorithm selects data
vectors from the training set that are utilized to construct the optimal separating hyperplane between different
classes. This hyperplane is optimal in the sense that the margin between the convex hulls of the different classes
is maximized. The training patterns that are located at the margin define the hyperplane and are called support
vectors (SV).
Classification of a feature vector z is performed with the following formula:
f(z) = sign( Σ_{i=1..nsv} αi yi ⟨xi, z⟩ + b )
Here, xi are the support vectors, yi encodes their class membership (±1) and αi the weight coefficients. The dis-
tance of the hyperplane to the origin is b. The α and b are determined during training with train_class_svm.
Note that only a subset of the original training set (nsv : number of support vectors) is necessary for the definition
of the decision boundary and therefore data vectors that are not support vectors are discarded. The classification
speed depends on the evaluation of the dot product between support vectors and the feature vector to be classified,
and hence depends on the length of the feature vector and the number nsv of support vectors.
For classification problems in which the classes are not linearly separable the algorithm is extended in two ways.
First, during training a certain amount of errors (overlaps) is compensated with the use of slack variables. This
means that the α are upper bounded by a regularization constant. To enable an intuitive control of the amount of
training errors, the Nu-SVM version of the training algorithm is used. Here, the regularization parameter Nu is an
asymptotic upper bound on the number of training errors and an asymptotic lower bound on the number of support
vectors. As a rule of thumb, the parameter Nu should be set to the prior expectation of the application’s specific
error ratio, e.g., 0.01 (corresponding to a maximum training error of 1%). Please note that too large a value for Nu
might lead to an infeasible training problem, i.e., the SVM cannot be trained correctly (see train_class_svm
for more details). Since this can only be determined during training, an exception can only be raised there. In this
case, a new SVM with Nu chosen smaller must be created.
Second, because the above SVM exclusively calculates dot products between the feature vectors, it is possible to
incorporate a kernel function into the training and testing algorithm. This means that the dot products are substi-
tuted by a kernel function, which implicitly performs the dot product in a higher dimensional feature space. Given
the appropriate kernel transformation, an originally not linearly separable classification task becomes linearly sep-
arable in the higher dimensional feature space.
Different kernel functions can be selected with the parameter KernelType. For KernelType = ’linear’ the
dot product, as specified in the above formula is calculated. This kernel should solely be used for linearly or nearly
linearly separable classification tasks. The parameter KernelParam is ignored here.
The radial basis function (RBF) KernelType = ’rbf’ is the best choice for a kernel function because it achieves
good results for many classification tasks. It is defined as:
K(x, y) = exp(−γ ‖x − y‖²)
Here, the parameter KernelParam is used to select γ. The intuitive meaning of γ is the amount of influence of
a support vector upon its surroundings. A large value of γ (small influence on the surroundings) means that each
training vector becomes a support vector. The training algorithm learns the training data “by heart”, but lacks any
generalization ability (over-fitting). Additionally, the training/classification times grow significantly. Too small a
value for γ (large influence on the surroundings) leads to few support vectors defining the separating hyperplane
(under-fitting). One typical strategy is to select a small γ-Nu pair and successively increase the values as long as
the recognition rate increases.
With KernelType = ’polynomial_homogeneous’ or ’polynomial_inhomogeneous’, polynomial kernels can be
selected. They are defined as K(x, y) = ⟨x, y⟩^d for the homogeneous and K(x, y) = (⟨x, y⟩ + 1)^d for the
inhomogeneous kernel. The degree d of the polynomial kernel must be set with KernelParam. Please note that a
polynomial kernel of too high a degree (d > 10) might result in numerical problems.
As a rule of thumb, the RBF kernel provides a good choice for most of the classification problems and should
therefore be used in almost all cases. Nevertheless, the linear and polynomial kernels might be better suited for
certain applications and can be tested for comparison. Please note that the novelty-detection Mode and the operator
reduce_class_svm are provided only for the RBF kernel.
Mode specifies the general classification task, which is either how to break down a multi-class decision problem to
binary sub-cases or whether to use a special classifier mode called ’novelty-detection’. Mode = ’one-versus-all’
creates a classifier where each class is compared to the rest of the training data. During testing the class with the
largest output (see the classification formula without sign) is chosen. Mode = ’one-versus-one’ creates a binary
classifier between each single class. During testing a vote is cast and the class with the majority of the votes
is selected. The optimal Mode for multi-class classification depends on the number of classes. Given n classes
’one-versus-all’ creates n classifiers, whereas ’one-versus-one’ creates n(n − 1)/2. Note that for a binary decision
task ’one-versus-one’ would create exactly one, whereas ’one-versus-all’ unnecessarily creates two symmetric
classifiers. For few classes (approximately up to 10) ’one-versus-one’ is faster for training and testing, because the
sub-classifiers each are trained on fewer training data and result in fewer support vectors overall. In case of many
classes, ’one-versus-all’ is preferable, because ’one-versus-one’ generates a prohibitively large number of
sub-classifiers, as their number grows quadratically with the number of classes.
A special case of classification is Mode = ’novelty-detection’, where the test data is classified only with regard to
membership to the training data, i.e., NumClasses must be set to 1. The separating hyperplane lies around the
training data and thereby implicitly divides the training data from the rejection class. The advantage is that the
rejection class is not defined explicitly, which is difficult to do in certain applications like texture classification.
The resulting support vectors are all lying at the border. With the parameter Nu, the ratio of outliers in the training
data set is specified. Note that when classifying in the ’novelty-detection’ mode, the class of the training data is
returned with index 1 and the rejection class is returned with index 0. Thus, the first class serves as rejection class.
In contrast, when using the MLP classifier, the last class serves as rejection class by default.
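The following lines sketch a novelty detection setup; the kernel parameter, Nu, the file name, and the test feature vector Features are illustrative assumptions.
create_class_svm (NumFeatures, 'rbf', 0.01, 0.05, 1, 'novelty-detection', \
                  'normalization', NumFeatures, SVMHandle)
read_samples_class_svm (SVMHandle, 'samples.mtf')
train_class_svm (SVMHandle, 0.001, 'default')
* Class 1: the feature vector belongs to the training data,
* class 0: rejection class.
classify_class_svm (SVMHandle, Features, 1, Class)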
The parameters Preprocessing and NumComponents can be used to specify a preprocessing of the feature
vectors. For Preprocessing = ’none’, the feature vectors are passed unaltered to the SVM. NumComponents
is ignored in this case.
For all other values of Preprocessing, the training data set is used to compute a transformation of the feature
vectors during the training as well as later in the classification.
For Preprocessing = ’normalization’, the feature vectors are normalized. In case of a polynomial kernel, the
minimum and maximum value of the training data set is transformed to -1 and +1. In case of the RBF kernel, the
data is normalized by subtracting the mean of the training vectors and dividing the result by the standard deviation
of the individual components of the training vectors. Hence, the transformed feature vectors have a mean of 0 and
a standard deviation of 1. The normalization does not change the length of the feature vector. NumComponents
is ignored in this case. This transformation can be used if the mean and standard deviation of the feature vectors
differ substantially from 0 and 1, respectively, or for data in which the components of the feature vectors are
measured in different units (e.g., if some of the data are gray value features and some are region features, or
if region features are mixed, e.g., ’circularity’ (unit: scalar) and ’area’ (unit: pixel squared)). The
normalization transformation should be performed in general, because it increases the numerical stability during
training/testing.
For Preprocessing = ’principal_components’, a principal component analysis (PCA) is performed. First, the
feature vectors are normalized (see above). Then, an orthogonal transformation (a rotation in the feature space)
that decorrelates the training vectors is computed. After the transformation, the mean of the training vectors is
0 and the covariance matrix of the training vectors is a diagonal matrix. The transformation is chosen such that
the largest part of the variation is contained in the first components of the transformed
feature vector. With this, it is possible to omit the transformed features in the last components of the feature vector,
which typically are mainly influenced by noise, without losing a large amount of information. The parameter
NumComponents can be used to determine how many of the transformed feature vector components should be
used. Up to NumFeatures components can be selected. The operator get_prep_info_class_svm can be
used to determine how much information each transformed component contains. Hence, it aids the selection of
NumComponents. Like data normalization, this transformation can be used if the mean and standard deviation of
the feature vectors differ substantially from 0 and 1, respectively, or for feature vectors in which the components
of the data are measured in different units. In addition, this transformation is useful if it can be expected that the
features are highly correlated. Please note that the RBF kernel is very robust against the dimensionality reduction
performed by PCA and should therefore be the first choice when speeding up the classification time.
The transformation specified by Preprocessing = ’canonical_variates’ first normalizes the training vectors
and then decorrelates the training vectors on average over all classes. At the same time, the transformation maxi-
mally separates the mean values of the individual classes. As for Preprocessing = ’principal_components’,
the transformed components are sorted by information content, and hence transformed components with little infor-
mation content can be omitted. For canonical variates, up to min(NumClasses−1, NumFeatures) components
can be selected. Also in this case, the information content of the transformed components can be determined with
get_prep_info_class_svm. Like principal component analysis, canonical variates can be used to reduce
the amount of data without losing a large amount of information, while additionally optimizing the separability of
the classes after the data reduction. The computation of the canonical variates is also called linear discriminant
analysis.
For the last two types of transformations (’principal_components’ and ’canonical_variates’), the length of input
data of the SVM is determined by NumComponents, whereas NumFeatures determines the dimensionality of
the input data (i.e., the length of the untransformed feature vector). Hence, by using one of these two transforma-
tions, the size of the SVM with respect to data length is reduced, leading to shorter training/classification times by
the SVM.
After the SVM has been created with create_class_svm, typically training samples are added to the SVM
by repeatedly calling add_sample_class_svm or read_samples_class_svm. After this, the SVM is
typically trained using train_class_svm. Hereafter, the SVM can be saved using write_class_svm.
Alternatively, the SVM can be used immediately after training to classify data using classify_class_svm.
A comparison of the SVM and the multi-layer perceptron (MLP) (see create_class_mlp) typically shows
that SVMs are generally faster at training, especially for huge training sets, and achieve slightly better recognition
rates than MLPs. The MLP is faster at classification and should therefore be preferred in time critical applications.
Please note that this guideline assumes optimal tuning of the parameters.
Parameters
. NumFeatures (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of input variables (features) of the SVM.
Default: 10
Suggested values: NumFeatures ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction: NumFeatures >= 1
. KernelType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
The kernel type.
Default: ’rbf’
List of values: KernelType ∈ {’linear’, ’rbf’, ’polynomial_inhomogeneous’, ’polynomial_homogeneous’}
. KernelParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Additional parameter for the kernel function. In case of RBF kernel the value for γ. For polynomial kernel the
degree
Default: 0.02
Suggested values: KernelParam ∈ {0.01, 0.02, 0.05, 0.1, 0.5}
. Nu (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Regularization constant of the SVM.
Default: 0.05
Suggested values: Nu ∈ {0.0001, 0.001, 0.01, 0.05, 0.1, 0.2, 0.3}
Restriction: Nu > 0.0 && Nu < 1.0
. NumClasses (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of classes.
Default: 5
Suggested values: NumClasses ∈ {2, 3, 4, 5, 6, 7, 8, 9, 10}
Restriction: NumClasses >= 1
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
The mode of the SVM.
Default: ’one-versus-one’
List of values: Mode ∈ {’novelty-detection’, ’one-versus-all’, ’one-versus-one’}
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of preprocessing used to transform the feature vectors.
Default: ’normalization’
List of values: Preprocessing ∈ {’none’, ’normalization’, ’principal_components’, ’canonical_variates’}
. NumComponents (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = ’none’ and
Preprocessing = ’normalization’).
Default: 10
Suggested values: NumComponents ∈ {1, 2, 3, 4, 5, 8, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100}
Restriction: NumComponents >= 1
. SVMHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
Example
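* A minimal sketch of the typical workflow described above; the feature
* data (Features, Class) and the file name are illustrative assumptions.
create_class_svm (NumFeatures, 'rbf', 0.02, 0.05, NumClasses, \
                  'one-versus-one', 'normalization', NumFeatures, \
                  SVMHandle)
add_sample_class_svm (SVMHandle, Features, Class)
train_class_svm (SVMHandle, 0.001, 'default')
write_class_svm (SVMHandle, 'classifier.svm')
classify_class_svm (SVMHandle, Features, 1, PredictedClass)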
Result
If the parameters are valid the operator create_class_svm returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Execution Information
deserialize_class_svm deserializes a support vector machine (SVM) (including its training samples),
that was serialized by serialize_class_svm (see fwrite_serialized_item for an introduction
of the basic principle of serialization). The serialized support vector machine is defined by the handle
SerializedItemHandle. The deserialized values are stored in an automatically created support vector ma-
chine with the handle SVMHandle.
Parameters
. SerializedItemHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . serialized_item ; handle
Handle of the serialized item.
. SVMHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
Result
If the parameters are valid, the operator deserialize_class_svm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
Possible Predecessors
train_class_svm, read_class_svm
See also
create_class_svm
Module
Foundation
This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific
operators even though the handle is used as an input parameter by those operators.
Possible Predecessors
add_sample_class_svm, read_samples_class_svm
Possible Successors
add_class_train_data_mlp, add_class_train_data_gmm, add_class_train_data_knn
See also
create_class_train_data
Module
Foundation
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. NumFeatures (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of input variables (features) of the SVM.
. KernelType (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
The kernel type.
. KernelParam (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Additional parameter for the kernel.
. Nu (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Regularization constant of the SVM.
. NumClasses (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of classes of the test data.
. Mode (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
The mode of the SVM.
. Preprocessing (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of preprocessing used to transform the feature vectors.
. NumComponents (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Preprocessing parameter: Number of transformed features (ignored for Preprocessing = ’none’ and
Preprocessing = ’normalization’).
Result
If the parameters are valid the operator get_params_class_svm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
get_prep_info_class_svm ( : : SVMHandle,
Preprocessing : InformationCont, CumInformationCont )
Compute the information content of the preprocessed feature vectors of a support vector machine
get_prep_info_class_svm computes the information content of the training vectors that have been
transformed with the preprocessing given by Preprocessing. Preprocessing can be set to
’principal_components’ or ’canonical_variates’. The preprocessing methods are described with create_class_svm.
The information content is derived from the variations of the transformed components of the feature vec-
tor, i.e., it is computed solely based on the training data, independent of any error rate on the training
data. The information content is computed for all relevant components of the transformed feature vec-
tors (NumFeatures for ’principal_components’ and min(NumClasses − 1, NumFeatures) for
’canonical_variates’, see create_class_svm), and is returned in InformationCont as a number between 0 and
1. To convert the information content into a percentage, it simply needs to be multiplied by 100. The cumulative
information content of the first n components is returned in the n-th component of CumInformationCont,
i.e., CumInformationCont contains the sums of the first n elements of InformationCont. To use
get_prep_info_class_svm, a sufficient number of samples must be added to the support vector machine
(SVM) given by SVMHandle by using add_sample_class_svm or read_samples_class_svm.
InformationCont and CumInformationCont can be used to decide how many components of the
transformed feature vectors contain relevant information. An often used criterion is to require that the trans-
formed data must represent x% (e.g., 90%) of the data. This can be decided easily from the first value
of CumInformationCont that lies above x%. The number thus obtained can be used as the value for
NumComponents in a new call to create_class_svm. The call to get_prep_info_class_svm al-
ready requires the creation of an SVM, and hence the setting of NumComponents in create_class_svm to
an initial value. However, when get_prep_info_class_svm is called, it is typically not known how many
components are relevant, and hence how to set NumComponents in this call. Therefore, the following two-
step approach should typically be used to select NumComponents: In a first step, an SVM with the maximum
number for NumComponents is created (NumFeatures for ’principal_components’ and min(NumClasses−
1, NumFeatures) for ’canonical_variates’). Then, the training samples are added to the SVM and are saved in
a file using write_samples_class_svm. Subsequently, get_prep_info_class_svm is used to deter-
mine the information content of the components, and with this NumComponents. After this, a new SVM with the
desired number of components is created, and the training samples are read with read_samples_class_svm.
Finally, the SVM is trained with train_class_svm.
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. Preprocessing (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of preprocessing used to transform the feature vectors.
Default: ’principal_components’
List of values: Preprocessing ∈ {’principal_components’, ’canonical_variates’}
. InformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Relative information content of the transformed feature vectors.
. CumInformationCont (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Cumulative information content of the transformed feature vectors.
Example
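* A minimal sketch of the two-step approach described above; the file
* name and the 90% information criterion are illustrative assumptions.
* Step 1: create an SVM with the maximum number of components and
* determine the information content of the transformed components.
create_class_svm (NumFeatures, 'rbf', 0.02, 0.05, NumClasses, \
                  'one-versus-one', 'principal_components', NumFeatures, \
                  SVMHandle)
read_samples_class_svm (SVMHandle, 'samples.mtf')
get_prep_info_class_svm (SVMHandle, 'principal_components', \
                         InformationCont, CumInformationCont)
* Select the smallest number of components that preserves 90% of the
* information content.
NumComp := |CumInformationCont|
for I := 0 to |CumInformationCont| - 1 by 1
    if (CumInformationCont[I] >= 0.9)
        NumComp := I + 1
        break
    endif
endfor
* Step 2: create and train a new SVM with the selected number of components.
create_class_svm (NumFeatures, 'rbf', 0.02, 0.05, NumClasses, \
                  'one-versus-one', 'principal_components', NumComp, \
                  SVMHandle2)
read_samples_class_svm (SVMHandle2, 'samples.mtf')
train_class_svm (SVMHandle2, 0.001, 'default')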
Result
If the parameters are valid the operator get_prep_info_class_svm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
get_prep_info_class_svm may return the error 9211 (Matrix is not positive definite) if Preprocessing
= ’canonical_variates’ is used. This typically indicates that not enough training samples have been stored for each
class.
Execution Information
Return a training sample from the training data of a support vector machine.
get_sample_class_svm reads out a training sample from the support vector machine (SVM) given by
SVMHandle that was added with add_sample_class_svm or read_samples_class_svm. The in-
dex of the sample is specified with IndexSample. The index is counted from 0, i.e., IndexSample
must be a number between 0 and NumSamples − 1, where NumSamples can be determined with
get_sample_num_class_svm. The training sample is returned in Features and Target. Features
is a feature vector of length NumFeatures (see create_class_svm), while Target is the index of the
class, ranging between 0 and NumClasses-1 (see add_sample_class_svm).
get_sample_class_svm can, for example, be used to reclassify the training data with
classify_class_svm in order to determine which training samples, if any, are classified incorrectly.
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. IndexSample (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of the stored training sample.
. Features (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
Feature vector of the training sample.
. Target (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Target vector of the training sample.
Example
* Train an SVM
create_class_svm (NumFeatures, 'rbf', 0.01, 0.01, NumClasses,\
'one-versus-all', 'normalization', NumFeatures,\
SVMHandle)
read_samples_class_svm (SVMHandle, 'samples.mtf')
train_class_svm (SVMHandle, 0.001, 'default')
* Reclassify the training samples
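* (A minimal sketch of the reclassification loop; the operator calls
* follow the signatures documented in this chapter.)
get_sample_num_class_svm (SVMHandle, NumSamples)
for I := 0 to NumSamples - 1 by 1
    get_sample_class_svm (SVMHandle, I, Features, Target)
    classify_class_svm (SVMHandle, Features, 1, Class)
    if (Class != Target)
        * The sample with index I is classified incorrectly.
    endif
endfor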
Result
If the parameters are valid the operator get_sample_class_svm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
Possible Predecessors
add_sample_class_svm, read_samples_class_svm, get_sample_num_class_svm,
get_support_vector_class_svm
Possible Successors
classify_class_svm
See also
create_class_svm
Module
Foundation
Return the number of training samples stored in the training data of a support vector machine.
get_sample_num_class_svm returns in NumSamples the number of training samples that are stored in
the support vector machine (SVM) given by SVMHandle. get_sample_num_class_svm should be called
before the individual training samples are accessed with get_sample_class_svm, e.g., for the purpose of
reclassifying the training data (see get_sample_class_svm).
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. NumSamples (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Number of stored training samples.
Result
If SVMHandle is valid the operator get_sample_num_class_svm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
Possible Successors
get_sample_class_svm
See also
create_class_svm
Module
Foundation
get_support_vector_class_svm ( : : SVMHandle,
IndexSupportVector : Index )
Return the index of a support vector from a trained support vector machine.
The operator get_support_vector_class_svm maps a support vector of a trained SVM (given
in SVMHandle) to the original training data set. The index of the SV is specified with
IndexSupportVector. The index is counted from 0, i.e., IndexSupportVector must be a num-
ber between 0 and NumSupportVectors − 1, where NumSupportVectors can be determined with
get_support_vector_num_class_svm. The index of this SV in the training data is returned in Index.
This Index can be used for a query with get_sample_class_svm to obtain the feature vectors that become
support vectors. get_sample_class_svm can, for example, be used to visualize the support vectors.
Note that when using train_class_svm with a mode different from ’default’ or reducing the SVM with
reduce_class_svm, the returned Index will always be -1, i.e., it will be invalid. The reason for this is
that a consistent mapping between SV and training data becomes impossible.
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. IndexSupportVector (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Index of the stored support vector.
. Index (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Index of the support vector in the training set.
Result
If the parameters are valid the operator get_support_vector_class_svm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
get_support_vector_num_class_svm (
: : SVMHandle : NumSupportVectors, NumSVPerSVM )
Alternatives
add_sample_class_svm
See also
write_samples_class_svm, clear_samples_class_svm
Module
Foundation
Approximate a trained support vector machine by a reduced support vector machine for faster classification.
As described in create_class_svm, the classification time of a SVM depends on the number of kernel evalu-
ations between the support vectors and the feature vectors. While the length of the data vectors can be reduced in a
preprocessing step like ’principal_components’ or ’canonical_variates’ (see create_class_svm for details),
the number of resulting SV depends on the complexity of the classification problem. The number of SVs is deter-
mined during training. To further reduce classification time, the number of SVs can be reduced by approximating
the original separating hyperplane with fewer SVs than originally required. For this purpose, a copy of the orig-
inal SVM provided by SVMHandle is created and returned in SVMHandleReduced. This new SVM has the
same parametrization as the original SVM, but a different SV expansion. The training samples that are included in
SVMHandle are not copied. The original SVM is not modified by reduce_class_svm.
The reduction method is selected with Method. Currently, only a bottom up approach is supported, which itera-
tively merges SVs. The algorithm stops if either the minimum number of SVs is reached (MinRemainingSV)
or if the accumulated maximum error exceeds the threshold MaxError. Note that the approximation reduces the
complexity of the hyperplane and thereby leads to a deteriorated classification rate. A common approach is there-
fore to start from a small MaxError, e.g., 0.001, and to increase its value step by step. To control the reduction
ratio, at each step the number of remaining SVs is determined with get_support_vector_num_class_svm
and the classification rate is checked on a separate test data set with classify_class_svm.
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
Original SVM handle.
. Method (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of postprocessing to reduce number of SV.
Default: ’bottom_up’
List of values: Method ∈ {’bottom_up’}
. MinRemainingSV (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Minimum number of remaining SVs.
Default: 2
Suggested values: MinRemainingSV ∈ {2, 3, 4, 5, 7, 10, 15, 20, 30, 50}
Restriction: MinRemainingSV >= 2
. MaxError (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Maximum allowed error of reduction.
Default: 0.001
Suggested values: MaxError ∈ {0.0001, 0.0002, 0.0005, 0.001, 0.002, 0.005, 0.01, 0.02, 0.05}
Restriction: MaxError > 0.0
. SVMHandleReduced (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVMHandle of reduced SVM.
Example
* Train an SVM
create_class_svm (NumFeatures, 'rbf', 0.01, 0.01, NumClasses,\
'one-versus-all', 'normalization', NumFeatures,\
SVMHandle)
read_samples_class_svm (SVMHandle, 'samples.mtf')
train_class_svm (SVMHandle, 0.001, 'default')
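* (A minimal sketch of the reduction step described above; the values of
* MinRemainingSV and MaxError as well as the test feature vector are
* illustrative assumptions.)
reduce_class_svm (SVMHandle, 'bottom_up', 2, 0.001, SVMHandleReduced)
get_support_vector_num_class_svm (SVMHandleReduced, NumSV, NumSVPerSVM)
* Check the classification rate of the reduced SVM on separate test data.
classify_class_svm (SVMHandleReduced, Features, 1, Class)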
Result
If the parameters are valid the operator reduce_class_svm returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Execution Information
Possible Predecessors
train_class_svm, get_support_vector_num_class_svm
Possible Successors
classify_class_svm, write_class_svm, get_support_vector_num_class_svm
See also
train_class_svm
Module
Foundation
The optimization criterion is the classification rate of a two-fold cross-validation of the training data. The best
achieved value is returned in Score.
The parameters ’nu’ and ’gamma’ for the SVM that is used to classify can be set to ’auto’ by using the parameters
GenParamName and GenParamValue. If they are set to ’auto’, the optimal ’nu’ and/or ’gamma’ is estimated
automatically. This automatic estimation of ’nu’ and ’gamma’ can take a substantial amount of time (up to days,
depending on the data set and the number of features).
Additionally, there is the parameter ’mode’ which can be either set to ’one-versus-all’ or ’one-versus-one’. An
explanation of the two modes as well as of the parameters ’nu’ and ’gamma’ as the kernel parameter of the radial
basis function (RBF) kernel can be found in create_class_svm.
Attention
This operator may take considerable time, depending on the size of the data set in the training file, and the number
of features.
Please note that this operator should not be called if only a small set of training data is available. Due to the risk of
overfitting, the operator select_feature_set_svm may deliver a classifier with a very high score that
nevertheless performs poorly when tested.
Parameters
. ClassTrainDataHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_train_data ; handle
Handle of the training data.
. SelectionMethod (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Method to perform the selection.
Default: ’greedy’
List of values: SelectionMethod ∈ {’greedy’, ’greedy_oscillating’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Names of generic parameters to configure the selection process and the classifier.
Default: []
List of values: GenParamName ∈ {’nu’, ’gamma’, ’mode’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer / string
Values of generic parameters to configure the selection process and the classifier.
Default: []
Suggested values: GenParamValue ∈ {0.02, 0.05, ’auto’, ’one-versus-one’, ’one-versus-all’}
. SVMHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
A trained SVM classifier using only the selected features.
. SelectedFeatureIndices (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string-array ; string
The selected feature set, contains indices.
. Score (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real-array ; real
The achieved score using two-fold cross-validation.
Example
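* A minimal sketch, assuming ClassTrainDataHandle already contains
* labeled training data; the parameter choices are illustrative.
select_feature_set_svm (ClassTrainDataHandle, 'greedy', \
                        ['nu','gamma'], ['auto','auto'], \
                        SVMHandle, SelectedFeatureIndices, Score)
* SelectedFeatureIndices lists the features the returned SVM expects.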
Result
If the parameters are valid, the operator select_feature_set_svm returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
Execution Information
training samples and then retraining it. Please note that the preprocessing (as described in create_class_svm)
is not changed when training with TrainMode = ’add_sv_to_train_set’.
Parameters
. SVMHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class_svm ; handle
SVM handle.
. Epsilon (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real
Stop parameter for training.
Default: 0.001
Suggested values: Epsilon ∈ {0.00001, 0.0001, 0.001, 0.01, 0.1}
. TrainMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; string / integer
Mode of training. For normal operation: ’default’. If SVs already included in the SVM should be used for
training: ’add_sv_to_train_set’. For alpha seeding: the respective SVM handle.
Default: ’default’
List of values: TrainMode ∈ {’default’, ’add_sv_to_train_set’}
Example
* Train an SVM
create_class_svm (NumFeatures, 'rbf', 0.01, 0.01, NumClasses,\
'one-versus-all', 'normalization', NumFeatures,\
SVMHandle)
read_samples_class_svm (SVMHandle, 'samples.mtf')
train_class_svm (SVMHandle, 0.001, 'default')
write_class_svm (SVMHandle, 'classifier.svm')
Result
If the parameters are valid the operator train_class_svm returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Execution Information
During execution of this operator, access to the value of this parameter must be synchronized if it is used across
multiple threads.
Possible Predecessors
add_sample_class_svm, read_samples_class_svm
Possible Successors
classify_class_svm, write_class_svm, create_class_lut_svm
Alternatives
train_dl_classifier_batch, read_class_svm
See also
create_class_svm
References
John Shawe-Taylor, Nello Cristianini: “Kernel Methods for Pattern Analysis”; Cambridge University Press, Cam-
bridge; 2004.
Bernhard Schölkopf, Alexander J.Smola: “Learning with Kernels”; MIT Press, London; 1999.
Module
Foundation
Possible Predecessors
add_sample_class_svm
Possible Successors
clear_samples_class_svm
See also
create_class_svm, get_prep_info_class_svm, read_samples_class_svm
Module
Foundation
Control
u := sin(x) + cos(y)
u = sin(x) + cos(y);
If the operator window is used for entering an assignment, assign must be entered into the operator combo box
as an operator name. This opens the parameter area, where the parameter Input represents the expression that
has to be evaluated to one value and assigned to the variable, i.e., this is the right side of the assignment. The
parameter Result gets the name of the variable, i.e., this is the left side of the assignment.
Attention
In addition to the parameter type control, which is indicated in the parameter description, assign also supports
iconic variables and vector variables. For an assignment, the parameter types of the two parameters Input and
Result must be identical. For the assignment of iconic objects, the operator copy_obj is used internally.
Parameters
. Input (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer / string
New value.
Default: 1
. Result (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer / string
Variable that has to be changed.
Example
Tuple1 := [1,0,3,4,5,6,7,8,9]
Val := sin(1.2) + cos(1.2)
Tuple2 := []
Result
If the expression is correct assign returns 2 (H_MSG_TRUE). Otherwise, an exception is raised and an error
code returned.
Alternatives
insert
Module
Foundation
Areas[Radius-1] := Area
Areas[0,4,|Rad|-1] := 0
FileNames[0,2,4] := ['f1','f2','f3']
The operator assign_at replaces and extends the modifying version of the old insert operator.
Parameters
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Indices of the elements that have to be replaced by the new value(s).
Default: 0
Suggested values: Index ∈ {0, 1, 2, 3, 4, 5, 6}
Minimum increment: 1
. Value (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . tuple(-array) ; integer / real / string
Value(s) that is to be assigned.
Default: 1
. Result (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . tuple(-array) ; real / integer / string
Result tuple containing the assigned values.
Result
If the expression is correct assign_at returns 2 (H_MSG_TRUE). Otherwise, an exception is raised and an error
code returned.
Alternatives
assign, tuple_replace
Module
Foundation
break ( : : : )
Result
break (as an operator) always returns 2 (H_MSG_TRUE).
Alternatives
continue
See also
for, while, repeat, until, switch, case
Module
Foundation
case ( : : Constant : )
catch ( : : : Exception )
is ignored and program execution continues after the corresponding endtry operator. In contrast, in an error case
the program execution jumps directly from the operator where the error occurred (or from the throw operator) to
the catch operator of the surrounding try-catch block. The output control parameter Exception returns a
tuple that contains a predefined set of data describing the error in case an operator error occurred. If the exception
was thrown by the throw operator, an arbitrary user-defined tuple can be returned.
The most important data within the Exception tuple is the error code. Therefore, this is passed as the first item
of the Exception tuple and can be accessed directly with Exception[0]. However, all other data has to be
accessed through the operator dev_get_exception_data, because the order and the extent of the provided
data may change in future versions and may vary for different programming language exports. Especially, it has
to be taken into account that in the exported code there are some items of the error tuple that are not available and
others that might not be determined until they are requested (like error messages).
If the exception was thrown by an operator error, a HALCON error code (< 10000) or if the aborted operator
belongs to an extension package, a user-defined error code (> 10000) is returned as the error code. A list of all
HALCON error codes can be found in the appendix of the “Extension Package Programmer’s Manual”. The first
element of a user-defined Exception tuple thrown by the operator throw should be an error code ≥ 30000.
Additional tuple elements can be chosen without any restrictions.
If an operator error occurred within HDevelop or HDevEngine, the following information about the error is pro-
vided by the Exception tuple:
In most cases, for an automatic exception handling it is sufficient to use the HALCON error code. Additional data
is primarily passed in order to provide some information about the error condition to the developer of the HDevelop
program for debugging reasons. Attention: in the exported code, in general, information about the error location
will not be available.
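The following lines sketch a try-catch block; the file name is illustrative, and the data name ’error_message’ passed to dev_get_exception_data is an assumption.
try
    read_image (Image, 'file_that_may_not_exist')
catch (Exception)
    ErrorCode := Exception[0]
    dev_get_exception_data (Exception, 'error_message', ErrorMessage)
endtry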
Attention
The export of the operators try, catch, endtry, and throw is not supported for the language C, but only for
the languages C++, C# and VisualBasic/.NET. Only the latter support throwing exceptions across procedures.
Parameters
. Exception (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . exception-array ; integer / string
Tuple returning the exception data.
Result
catch always returns 2 (H_MSG_TRUE).
Possible Successors
dev_get_exception_data
See also
try, endtry, throw, dev_get_exception_data, dev_set_check
Module
Foundation
comment ( : : Comment : )
comment allows adding a one-line comment to the program. All characters are allowed as the parameter value,
i.e., as the comment text. If the operator window is used to enter a comment and the comment line parameter
contains newlines, one comment statement is inserted for every text line.
In the full text editor a comment is marked by entering an asterisk (’*’) as the first non-whitespace character.
This operator has no effect on the program execution.
Parameters
. Comment (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Arbitrary sequence of characters.
Example
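A minimal sketch of a comment line in an HDevelop program (the variable name is illustrative):
* initialize the tuple that collects the measurement results
Results := []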
Result
comment is never executed.
Module
Foundation
continue ( : : : )
convert_tuple_to_vector_1d ( : : InputTuple,
SubTupleLength : ResultVector )
Result
If the values of the specified parameters are correct, convert_tuple_to_vector_1d returns 2
(H_MSG_TRUE). Otherwise, an exception is raised and an error code returned.
See also
convert_vector_to_tuple
Module
Foundation
default ( : : : )
else ( : : : )
Result
else (as operator) always returns 2 (H_MSG_TRUE).
Alternatives
if, elseif
See also
until, for, while
Module
Foundation
elseif ( : : Condition : )
endfor ( : : : )
endif ( : : : )
End of if command.
endswitch ( : : : )
endtry ( : : : )
endwhile ( : : : )
executable_expression ( : : Expression : )
• .clear()
• .insert()
• .remove()
For further details about these operations please refer to the HDevelop User’s Guide.
Even though Expression formally is presented as a control parameter, nonetheless it is also possible to execute
stand-alone operations with iconic vectors.
Parameters
. Expression (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-vector ; real / integer / string
Operation to be executed.
Example
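A minimal sketch of a stand-alone vector operation (assuming HDevelop's brace syntax for vector literals; the variable name is illustrative):
NumberVector := {1, 2, 3}
* remove all elements from the vector
NumberVector.clear()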
Result
If the values of the specified parameters are correct, executable_expression returns 2 (H_MSG_TRUE).
Otherwise, an exception is raised and an error code returned.
Module
Foundation
exit ( : : : )
Terminate HDevelop.
exit terminates HDevelop. The operator is equivalent to the menu entry File -> Quit. Internally and for
exported C++ code the C-function call exit(0) is used.
Example
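A minimal usage sketch (assuming the standard example image 'particle'):
read_image (Image, 'particle')
* ... process the image ...
exit ()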
Result
exit returns 0 (o.k.) to the calling environment of HDevelop, i.e., the operating system.
See also
stop
Module
Foundation
export_def ( : : Position, Text : )
Insert arbitrary text into the code exported from an HDevelop program.
The parameter Position determines where the text passed in Text is placed in the exported file:
’in_place’ - # The text is inserted in the procedure at the actual place, i.e., in between the neighboring program lines.
’at_file_begin’ - #^^ The text is exported at the very beginning of the exported file.
’before_procedure’ - #^ The text is exported immediately before the procedure it is defined in.
’after_procedure’ - #$ The text is exported immediately after the procedure it is defined in.
’at_file_end’ - #$$ The text is exported at the very end of the exported file.
In the program listing, export_def is not represented in normal operator syntax but marked by a special char-
acter sequence. The first character within the line is the export marker # that can be followed by a position marker
as listed above. If entering an export definition in the full text editor, please note that there must not be any spaces
before #.
For better readability, the export character sequence may be followed by one space character that is not interpreted
as part of the export text. All additional spaces are added to the export.
For lines that are exported within the current procedure, the export gets the same indentation as the current program
lines get. There is one exception: if the export text starts with # immediately after the export markers or the optional
space, the export text will not be indented at all, e.g.:
for Index := 1 to 5 by 1
# #ifdef MY_SWITCH
# int cnt = 100;
* an optional code block
# #endif
endfor
is exported to:
proc (...)
{
    ...
    for (...)
    {
#ifdef MY_SWITCH
        int cnt = 100;
        // an optional block
#endif
    }
    ...
}
An export definition can be activated and deactivated as any normal operator. Deactivated export definitions are
not exported.
Parameters
. Position (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Place where the export text is written.
List of values: Position ∈ {’in_place’, ’at_file_begin’, ’before_procedure’, ’after_procedure’,
’at_file_end’}
for ( : : Start, End, Step : Index )
Starts a loop block that is usually executed for a fixed number of iterations.
Syntax in HDevelop: for Index := Start to End by Step
The for statement starts a loop block that is usually executed for a fixed number of iterations. The for block
ends at the corresponding endfor statement.
The number of iterations is defined by the Start value, the End value, and the increment value Step. All
of these parameters can be initialized with expressions or variables instead of constant values. Please note that
these loop parameters are evaluated only once, namely, immediately before the for loop is entered. They are
not re-evaluated after the loop cycles, i.e., any modifications of these variables within the loop body will have no
influence on the number of iterations.
The passed loop parameters must be either of type integer or real. If all input parameters are of type
integer, the Index variable will also be of type integer. In all other cases the Index variable will be
of type real.
At the beginning of each iteration the loop variable Index is compared to the End parameter. If the increment
value Step is positive, the for loop is executed as long as the Index variable is less than or equal to the End
parameter. If the increment value Step is negative, the for loop is executed as long as the Index variable is
greater than or equal to the End parameter.
Attention: If the increment value Step is set to a value of type real, it may happen that the last loop cycle is omitted owing to rounding errors, namely if the Index variable is expected to match the End value exactly in the last cycle. Hence, on some systems the following loop is not executed four times as expected (with the Index variable set to 1.3, 1.4, 1.5, and 1.6), but only three times, because after three additions the index variable is slightly greater than 1.6 due to rounding errors.
I:=[]
for Index := 1.3 to 1.6 by 0.1
I := [I,Index]
endfor
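A common workaround, shown here as a sketch and not taken from the original text, is to iterate over integers and derive the real value inside the loop body:
I := []
for K := 0 to 3 by 1
    I := [I,1.3 + K * 0.1]
endfor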
After the execution of the loop body, i.e., upon reaching the corresponding endfor statement or a continue
statement, the increment value (as initialized at the beginning of the for loop) is added to the current value of
the loop counter Index. Then, the loop condition is re-evaluated as described above. Depending on the result
the loop is either executed again or finished in which case execution continues with the first statement after the
corresponding endfor statement.
A break statement within the loop (that is not contained in a more deeply nested block) leaves the loop immediately, and execution continues after the corresponding endfor statement. In contrast, the continue statement is used to skip the rest of the loop body in the current cycle and continue execution with adapting the Index variable and re-evaluating the loop condition.
Attention: It is recommended to avoid modifying the Index variable of the for loop within its body.
If the for loop is stopped, e.g., by a stop statement or by pressing the Stop button, and if the PC is placed
manually by the user, the for loop is continued at the current iteration as long as the PC remains within the for
body or is set to the endfor statement. If the PC is set on the for statement (or before it) and executed again,
the loop is reinitialized and restarts at the beginning.
Parameters
. Start (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer / real
Start value of the loop variable.
Default: 1
. End (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer / real
End value of the loop variable.
Default: 5
. Step (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer / real
Increment value of the loop variable.
Default: 1
. Index (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer / real
Loop variable.
Example
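A minimal sketch of a for loop using the default parameter values listed above:
Sum := 0
for Index := 1 to 5 by 1
    Sum := Sum + Index
endfor
* Sum is now 15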
Result
If the values of the specified parameters are correct, for (as an operator) returns 2 (H_MSG_TRUE). Otherwise,
an exception is raised and an error code is returned.
Alternatives
while, until
See also
repeat, break, continue, endfor
Module
Foundation
global ( : : Declaration : )
global declares a global variable that can be accessed from all procedures that contain the same declaration. If the procedures are not exported into one output file that contains all procedures together but into separate output files, it will become
necessary to mark one of the global variable declarations as the place where the variable is defined. A set of
procedure export files that are linked to one library or application must contain exactly one definition of each
global variable in order to avoid both undefined symbols and multiple definitions.
In the program listing, global variable declarations are displayed and must be entered without parentheses in order to emphasize that the line is a declaration and not an executable operator. The syntax is as follows:
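A sketch of possible declarations, based on the suggested values listed below (the variable names are illustrative):
global tuple GlobalCounter
global def object GlobalImage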
Parameters
. Declaration (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Global variable declaration: optional keyword ’def’, type, and variable name
Suggested values: Declaration ∈ {’object’, ’tuple’, ’def object’, ’def tuple’, ’object vector(1)’, ’tuple
vector(1)’, ’def object vector(1)’, ’def tuple vector(1)’}
Result
global is never executed.
Module
Foundation
if ( : : Condition : )
Conditional statement.
if is a conditional statement that starts an if block. The Condition parameter must evaluate to a Boolean or
integer expression.
If Condition evaluates to ’true’ (not 0), the following block body up to the next corresponding block state-
ment elseif, else, or endif is executed. Reaching the end of the block the execution continues after the
corresponding endif statement.
If Condition evaluates to ’false’ (0), the execution is continued at the next corresponding block statement
elseif, else, or endif.
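A minimal sketch of an if block (the variable names and thresholds are illustrative):
if (Area > 1000)
    Label := 'large'
elseif (Area > 100)
    Label := 'medium'
else
    Label := 'small'
endif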
Parameters
. Condition (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Condition for the if statement.
Default: 1
Result
If the condition is correct, if (as operator) returns 2 (H_MSG_TRUE). Otherwise, an exception is raised and an
error code returned.
Alternatives
elseif, else
See also
for, while, until
Module
Foundation
import ( : : ProcedureSource : )
import makes the external procedures found at the location ProcedureSource available to the program, e.g.:
proc()
* unresolved procedure call
import ./the_one_dir
proc()
* resolves to ./the_one_dir/proc.hdvp
import ./the_other_dir
proc()
* resolves to ./the_other_dir/proc.hdvp
The parameter ProcedureSource points to the source of the external procedures. It can either be the path of
a directory that contains the procedures and/or the procedure libraries to be used or directly the file name of a
procedure library. In both cases, the path may either be absolute or relative. In the latter case, HDevelop interprets
the path as being relative to the file location of the procedure that contains the import statement. Thus, the
location of this procedure can be included with ’.’. The path has to be in quotes if it contains one or more spaces,
otherwise the program line will become invalid.
Contrary to system, user-defined, and session directories HDevelop looks only in the directory specified by an
import statement for external procedures but not recursively in its subdirectories.
Note that an import statement is never executed and, therefore, ProcedureSource has to be evaluated already at the procedure’s loading time. Hence, ProcedureSource has to be a constant expression; in particular, it is not possible to pass a string variable to ProcedureSource.
However, ProcedureSource may also contain environment variables, which HDevelop resolves accordingly.
Environment variables, regardless of the platform actually used, must always be denoted in Windows syntax, i.e.,
%VARIABLE%.
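A sketch of an import statement using an environment variable (the variable MY_PROC_DIR and the subdirectory are hypothetical):
import %MY_PROC_DIR%/procedures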
import neither tests whether the path ProcedureSource exists nor whether it points to a procedure library or a directory that contains procedures at all. Therefore, import statements with nonexistent or meaningless paths nevertheless remain valid program lines.
Import paths are listed separately in HDevelop’s procedure settings. Of course, these paths can’t be modified or
deactivated from within the procedure settings. Furthermore, procedures that are available only via an import
statement are marked with a special icon.
In the program listing, import statements are displayed and must be entered without parentheses in order to emphasize that the line is a declaration and not an executable operator.
Parameters
. ProcedureSource (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
File location of the external procedures to be loaded: either a directory or a procedure library
Result
import is never executed.
Module
Foundation
insert ( : : Input, Value, Index : Result )
Assignment of a value to an element of a tuple. In the program listing, such an assignment can appear in the following notation:
Areas[Radius-1] := Area
If the operator window is used for entering the insert operator, insert must be entered into the operator combo
box as the operator name. This opens the parameter area, where the parameter Value represents the expression
that has to be evaluated to one value and assigned to the element at position Index within the tuple Input. The
parameter Result gets the name of the variable where the result has to be stored.
If the input tuple that is passed via the parameter Input and the output tuple that is passed in Result are
identical (and only in that case), the insert operator is listed and can be written in the full text editor in the
above assignment notation. In this case, the input tuple is modified and the correct operator notation for above
assignment would be:
If the Input tuple and the Result tuple differ, the input tuple will not be modified. In this case, within the
program listing only the operator notation can be used:
Result := Areas
Result[Radius-1] := Area
Please note that the operator insert will not increase the tuple if the tuple already stores a value at the passed
index. Instead of that the element at the position Index will be replaced. Hence, for the Value parameter exactly
one single value (or an expression that evaluates to one single value) must be passed.
If the passed Index parameter is beyond the current tuple size, the tuple will be increased to the required size. The tuple elements between the previously last element and the new element are undefined.
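A minimal sketch of the assignment notation described above (the values are illustrative):
Areas := [10,20,30]
Areas[1] := 99
* Areas is now [10,99,30]
Areas[4] := 5
* Areas now has five elements; the element at index 3 is undefined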
Parameters
. Input (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer / string
Tuple, where the new value has to be inserted.
Default: []
. Value (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real ; real / integer / string
Value that has to be inserted.
Default: 1
Value range: 0 ≤ Value ≤ 1000000
. Index (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Index position for new value.
Default: 0
Suggested values: Index ∈ {0, 1, 2, 3, 4, 5, 6}
Minimum increment: 1
. Result (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . real(-array) ; real / integer / string
Result tuple with inserted values.
Result
If the expression is correct insert returns 2 (H_MSG_TRUE). Otherwise, an exception is raised and an error
code returned.
Alternatives
assign
Module
Foundation
par_join ( : : ThreadID : )
Wait for subthreads that were started with the par_start qualifier.
The par_join operator is used to wait in the calling procedure for all procedures or operators that have been
started in separate subthreads by adding the par_start qualifier to the according program line. The subthreads
to wait for are identified by their thread ids that are passed to the parameter ThreadID.
Attention: par_start is not an operator but a qualifier that is added at the beginning of the program line that has to be executed in parallel to the calling procedure. The syntax is par_start <ThreadID> : followed by the
actual procedure or operator call.
Parameters
. ThreadID (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . thread_id(-array) ; integer
Ids of all subthreads to wait for.
Example
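A minimal sketch of starting two subthreads and waiting for both (the procedure names my_proc_a and my_proc_b are hypothetical):
par_start <ThreadA> : my_proc_a ()
par_start <ThreadB> : my_proc_b ()
* ... do other work in the calling thread ...
par_join ([ThreadA,ThreadB])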
Result
If the values of the specified parameters are correct, par_join returns 2 (H_MSG_TRUE). Otherwise, an excep-
tion is raised and an error code returned.
Module
Foundation
repeat ( : : : )
return ( : : : )
stop ( : : : )
The stop operator stops the continuous program execution of the HDevelop program. If this happens, the PC
remains on the stop statement (instead of being placed at the next executable program line) to show the reason
for the program interruption directly even if numerous comments or other non-executable program lines follow.
The operator is equivalent to the Stop action (F9) in the menu bar. Unless parallel execution is used (via the
par_start qualifier), the program can easily be continued with the Run action (F5). See also “Parallel Execu-
tion” in the HDevelop User’s Guide.
It is possible to redefine the behavior by setting a time parameter in the preferences dialog. In this case, the
execution will not stop but continue after waiting for the specified period of time. Within this period of time, the
program can be interrupted with F9 or continued with one of the run commands. This is marked by an icon in the
first column of the program window.
Attention
This operator is not supported for code export.
Trying to continue a program that uses parallel execution after calling stop may cause non-deterministic thread
behavior or errors.
Example
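A minimal usage sketch (assuming the standard example image 'particle'):
read_image (Image, 'particle')
dev_display (Image)
* inspect the displayed image, then continue with F5
stop ()
threshold (Image, Region, 128, 255)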
Result
If the program stops at a stop statement, the return state of the previous operator is kept. If the program execution is continued after the stop statement, stop always returns 2 (H_MSG_TRUE).
See also
exit
Module
Foundation
switch ( : : ControlExpression : )
switch starts a multiway branch block that ends at the corresponding endswitch statement. The value of ControlExpression determines at which case label the program execution is continued; if no case label matches, execution continues at the default label, if present.
Parameters
. ControlExpression (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer ; integer
Integer expression that determines at which case label the program execution is continued.
Example
TestStr := ''
for Index := 1 to 8 by 1
TestStr := TestStr + '<'
switch (Index)
case 1:
TestStr := TestStr + '1'
break
case 2:
TestStr := TestStr + '2'
* intentionally fall through to 3
case 3:
TestStr := TestStr + '3'
* intentionally fall through to 4
case 4:
TestStr := TestStr + '4'
break
case 5:
case 6:
* common case branch for 5 and 6
TestStr := TestStr + '56'
break
case 7:
* continue for loop
TestStr := TestStr + '7'
continue
default:
TestStr := TestStr + 'd'
break
endswitch
TestStr := TestStr + '>'
endfor
Result
If the condition is correct, switch (as an operator) returns 2 (H_MSG_TRUE). Otherwise, an exception is raised
and an error code is returned.
Alternatives
if, elseif, else
See also
case, default, endswitch, if
Module
Foundation
throw ( : : Exception : )
The operator throw provides an opportunity to throw an exception from an arbitrary place in the program. This
exception can be caught by the catch operator of a surrounding try-catch block. By this means the developer
is able to define their own specific error or exception states, for which the normal program execution is aborted in
order to continue with a specific cross-procedure exception handling, e.g., for freeing resources or restarting from
a defined state.
In such a user-defined exception a nearly arbitrary tuple can be thrown as the Exception parameter; only the first element of the tuple should be set to a user-defined error code ≥ 30000. If different user-defined exception
states are possible, they can be distinguished using different error codes (≥ 30000) in the first element or by using
additional elements.
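A minimal sketch of throwing a user-defined exception (the error code 30001, the variable, and the message text are illustrative):
if (NumberOfParts == 0)
    throw ([30001,'no parts found in image'])
endif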
In addition, with the help of the operator throw it is possible to rethrow an exception that was caught with the
operator catch. This may be sensible, for instance, if within an inner try-catch-endtry block (e.g., within
an external procedure) only specific exceptions can be handled in an adequate way and all other exceptions must
be passed to the caller, where they can be caught and handled by an outer try-catch-endtry block.
For rethrowing a caught exception, it is possible to pass the Exception tuple that was caught by the catch
operator directly to the Exception parameter of the throw operator. Furthermore, it is possible to append
arbitrary (but not iconic) user data to the Exception tuple, that can be accessed after catching the exception as
’user_data’ with the operator dev_get_exception_data:
try
...
catch(Exception)
...
UserData := ...
throw([Exception, UserData])
endtry
Attention
The export of the operators try, catch, endtry, and throw is not supported for the language C, but only for
the languages C++, C# and VisualBasic/.NET. Only the latter support throwing exceptions across procedures.
Parameters
. Exception (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . exception-array ; integer / string
Tuple containing the exception data or user-defined error codes.
Result
If the values of the specified parameters are correct, throw (as operator) returns 2 (H_MSG_TRUE). Otherwise,
an exception is raised and an error code returned.
See also
try, catch, endtry, dev_get_exception_data, dev_set_check
Module
Foundation
try ( : : : )
try opens a program block that is watched for exceptions; an exception thrown within this block can be caught by the corresponding catch block. If an error occurred in
a procedure that was called from the try block (directly or via other procedure calls), the procedure call and
all intermediate procedure calls that are on the call stack above the try block are immediately aborted (or, if
applicable, also after displaying an error message box).
Whether an error message box is displayed before the exception is thrown or not, is controlled by the HDe-
velop preference ’Suppress error message dialogs within try-catch blocks’ that can be
reached via Edit->Preferences->General Options->Experienced User. This message box also
offers the opportunity to stop the program execution before the exception is thrown in order to edit the possibly
erroneous operator call.
The program block that is watched for exceptions ends with the corresponding catch operator. If within the
watched try block no exception occurred, the following catch block is ignored and the program execution
continues after the corresponding endtry operator.
try-catch-endtry blocks can be nested arbitrarily into each other, within a procedure or over different proce-
dure calls, as long as any inner try-catch-endtry block lies completely either within an outer try-catch or
a catch-endtry block. If an exception is thrown within an inner try-catch block, the exception is caught and handled in the corresponding catch-endtry block. Hence, the exception is not visible for the outer try-catch
blocks unless the exception is rethrown explicitly by calling a throw operator from the catch block.
If within a HALCON operator an error occurs, an exception tuple is created and passed to the catch operator
that is responsible for catching the exception. The tuple collects information about the error such as the error code
and the error text. After catching an exception, this information can be accessed with the help of the operator
dev_get_exception_data. For more information about the passed exception data, how to access them, and
considerations about the code export, see the description of that operator. The reference of the operator throw
describes how to throw user-defined exception tuples.
HDevelop offers the opportunity to disable the handling of HALCON errors. This can be achieved by calling
the operator dev_set_check(’~give_error’) or by unchecking the check box Give Error on the dialog
Edit->Preferences->Runtime Settings. If the error handling is switched off, in case of a HALCON error no exception is thrown but the program execution is continued as normal at the next operator. In contrast to that, the operator throw will always throw an exception independently of the ’give_error’ setting. The same applies if an error occurred during the evaluation of a parameter expression.
Attention
The export of the operators try, catch, endtry, and throw is not supported for the language C, but only for
the languages C++, C# and VisualBasic/.NET. Only the latter support throwing exceptions across procedures.
Example
try
read_image (Image, 'may_be_not_available')
catch (Exception)
if (Exception[0] == 5200)
dev_get_exception_data (Exception, 'error_message', ErrMsg)
set_tposition (3600, 24, 12)
write_string (3600, ErrMsg)
return ()
else
* rethrow the exception
throw ([Exception,'unknown exception in myproc'])
endif
endtry
Result
try always returns 2 (H_MSG_TRUE).
Alternatives
dev_set_check
See also
catch, endtry, throw, dev_get_exception_data, dev_set_check
Module
Foundation
until ( : : Condition : )
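A minimal sketch of a repeat/until block (assuming the block is repeated until Condition evaluates to true; the variable name is illustrative):
Count := 0
repeat
    Count := Count + 1
until (Count >= 3)
* Count is 3 after the loop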
while ( : : Condition : )
dev_update_window ('off')
dev_close_window ()
dev_open_window (0, 0, 512, 512, 'black', WindowID)
read_image (Image, 'particle')
dev_display (Image)
stop ()
threshold (Image, Large, 110, 255)
dilation_circle (Large, LargeDilation, 7.5)
dev_display (Image)
dev_set_draw ('margin')
dev_set_line_width (3)
dev_set_color ('green')
dev_display (LargeDilation)
dev_set_draw ('fill')
stop ()
complement (LargeDilation, NotLarge)
reduce_domain (Image, NotLarge, ParticlesRed)
mean_image (ParticlesRed, Mean, 31, 31)
dyn_threshold (ParticlesRed, Mean, SmallRaw, 3, 'light')
opening_circle (SmallRaw, Small, 2.5)
connection (Small, SmallConnection)
dev_display (Image)
dev_set_colored (12)
dev_display (SmallConnection)
stop ()
dev_set_color ('green')
dev_display (Image)
dev_display (SmallConnection)
Button := 1
while (Button == 1)
dev_set_color ('green')
get_mbutton (WindowID, Row, Column, Button)
dev_display (Image)
dev_display (SmallConnection)
dev_set_color ('red')
select_region_point (SmallConnection, SmallSingle, Row, Column)
dev_display (SmallSingle)
NumSingle := |SmallSingle|
if (NumSingle == 1)
intensity (SmallSingle, Image, MeanGray, DeviationGray)
area_center (SmallSingle, Area, Row, Column)
dev_set_color ('yellow')
set_tposition (WindowID, Row, Column)
write_string (WindowID, 'Area='+Area+', Int='+MeanGray)
endif
endwhile
dev_set_line_width (1)
dev_update_window ('on')
Result
If the values of the specified parameters are correct, while (as operator) returns 2 (H_MSG_TRUE). Otherwise,
an exception is raised and an error code returned.
Alternatives
for, until
See also
repeat, break, continue, if, elseif, else
Module
Foundation
Deep Learning
Introduction
The term deep learning (DL) refers to a family of machine learning methods. In HALCON, the following methods
are implemented:
3D Gripping Point Detection: Detect gripping points on objects in a 3D scene. For further information please
see the chapter 3D Matching / 3D Gripping Point Detection.
A possible example for a 3D Gripping Point Detection application: A 3D scene (e.g., an RGB image and
XYZ-images) is analyzed and possible gripping points are suggested.
Anomaly Detection and Global Context Anomaly Detection: Assign to each pixel the likelihood that it shows
an unknown feature. For further information please see the chapter Deep Learning / Anomaly Detection and
Global Context Anomaly Detection.
Top: A possible example for anomaly detection: A score is assigned to every pixel of the input image,
indicating how likely it shows an unknown feature, i.e., an anomaly. Bottom: A possible example for
Global Context Anomaly Detection: A score is assigned to every pixel of the input image, indicating how
likely it shows a structural or logical anomaly.
Classification: Classify an image into one class out of a given set of classes. For further information please see
the chapter Deep Learning / Classification.
A possible example for Out-of-Distribution Detection for classification: The image is assigned to a class
and identified as Out-of-Distribution if applicable.
Deep 3D Matching: Detect objects in a scene and compute their 3D pose. For further information please see the
chapter 3D Matching / Deep 3D Matching.
A possible example for a Deep 3D Matching application: Images from different angles are used to detect an
object. As a result the 3D pose of the object is computed.
Deep Counting: Detect and count objects in images. For further information please see the chapter Matching /
Deep Counting.
A possible example for a Deep Counting application: Objects in an image are counted and the object
quantity is returned.
Deep OCR: Detect and recognize words (not just characters) in an image. For further information please see the
chapter OCR / Deep OCR.
A possible example for deep-learning-based optical character recognition: Words in an image are detected
and recognized.
Multi-label Classification: An image is assigned all contained classes from a given set of classes. For further
information please see the chapter Deep Learning / Multi-Label Classification.
A possible example for multi-label classification: All contained classes are assigned to the image.
Object Detection and Instance Segmentation: Detect objects of the given classes and localize them within the
image. Instance segmentation is a special case of object detection, where the model distinguishes the individual object instances and additionally assigns to every found instance its region within the image. For further information please see the chapter Deep Learning / Object Detection and Instance Segmentation.
Top: A possible example for object detection: Within the input image three instances are found and
assigned to a class.
Bottom: A possible example for instance segmentation: Every instance gets its individual region marked.
Semantic Segmentation and Edge Extraction: Assign a class to each pixel of an image; different instances of a class are not distinguished. Edge extraction is a special case of semantic segmentation, where every pixel of the input image is assigned to one of the two classes ’edge’ and ’background’. For further information please see the chapter Deep Learning / Semantic Segmentation and Edge Extraction.
Top: A possible example for semantic segmentation: Every pixel of the input image is assigned to a class.
Bottom: A possible example for edge extraction: Pixels belonging to specific edges are assigned to the
class ’edge’.
All of the deep learning methods listed above use a network for the assignment task. In HALCON they are
implemented within the general DL model, see Deep Learning / Model. The model is trained by only considering
the input and output, which is also called end-to-end learning. Basically, using images and the information about what is visible in them, the training algorithm adjusts the model so that it distinguishes the different classes and, where applicable, also learns how to find the corresponding objects. For you, this has the benefit that no manual feature specification is needed. Instead, you have to select and collect appropriate data.
System Requirements and License Information
For deep learning, additional prerequisites apply. Please see the requirements listed in the HALCON
“Installation Guide”, paragraph “Requirements for Deep Learning and Deep-Learning-Based Methods”.
Note that the required module license depends on the model type used in your application. For a detailed descrip-
tion please refer to the “Installation Guide”, paragraph “Dynamic Modules for Deep-Learning-Based
Applications”.
General Workflow
As the DL methods mentioned above differ in what they do and how they need the data, you need to know which
method is most appropriate for your specific task. Once this is clear, you need to collect a suitable amount of data,
meaning images and the information needed by the method. After that, there is a common general workflow for
all these DL methods:
Prepare the Network and the Data The network needs to be prepared for your task and your data adapted to the
specific network.
Train the Network and Evaluate the Training Progress Once your network is set up and your data prepared it
is time to train the network for your specific task.
Apply and Evaluate the Final Network Your network is trained for your task and ready to be applied. But before
deploying it in the real world you should evaluate how well the network performs on basis of your test
dataset.
Inference Phase When your network is trained and you are satisfied with its performance, you can use it for
inference on new images. Thereby the images need to be preprocessed according to the requirements of the
network (thus, in the same way as for training).
Data
In the context of deep learning, the term ’data’ refers to the images and the information about what is in them. The latter information has to be provided in a way the network can understand. Not surprisingly, the different DL
methods have their own requirements concerning what information has to be provided and how. Please see the
corresponding chapters for the specific requirements.
The network further poses requirements on the images regarding the image dimensions, the gray value range, and
the type. The specific values depend on the network itself and can be queried with get_dl_model_param.
Additionally, depending on the method there are also requirements regarding the information as e.g., the bounding
boxes. To fulfill all these requirements, the data may have to be preprocessed, which can be done most conveniently
with the corresponding procedure preprocess_dl_samples.
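A minimal sketch of querying the image requirements of a model (the pretrained classifier file name is one of HALCON's standard models and is used here only for illustration):
read_dl_model ('pretrained_dl_classifier_compact.hdl', DLModelHandle)
get_dl_model_param (DLModelHandle, 'image_width', ImageWidth)
get_dl_model_param (DLModelHandle, 'image_height', ImageHeight)
get_dl_model_param (DLModelHandle, 'image_num_channels', NumChannels)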
When you train your network, the network gets adapted to its task. But at one point you will want to evaluate what
the network learned and at an even later point you will want to test the network. Therefore the dataset will be split
into three subsets which should be independent and identically distributed. In simple words, the subsets should
not be connected to each other in any way and each set contains for every class the same distribution of images.
This splitting is conveniently done by the procedure split_dl_dataset. The clearly largest subset will be
used for the retraining. We refer to this dataset as the training dataset. At a certain point the performance of the
network is evaluated to check whether it is beneficial to continue the network optimization. For this validation the
second set of data is used, the validation dataset. Even though the validation dataset is disjoint from the training dataset, it has an influence on the network optimization. Therefore, to test the predictions to be expected when the model is deployed in
the real world, the third dataset is used, the test dataset. For a representative network validation or evaluation, the
validation and test dataset should have statistically relevant data, which gives a lower bound on the amount of data
needed.
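A minimal sketch of such a split (assuming the standard procedure split_dl_dataset with percentages for the training and validation subsets; the remaining samples form the test dataset):
* 70 % training data, 15 % validation data, 15 % test data
split_dl_dataset (DLDataset, 70, 15, [])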
Note also, that for training the network, you best use representative images, i.e., images like the ones you want
to process later and not only ’perfect’ images, as otherwise the network may have difficulties with non-’perfect’
images.
The Network and the Training Process
In the context of deep learning, the assignments are performed by sending the input image through a network. The
output of the total network consists of a number of predictions. Such predictions are e.g., for a classification task
the confidence for each class, expressing how likely the image shows an instance of this class.
The specific network will vary, especially from one method to another. Some methods like e.g., object detection,
use a subnetwork to generate feature maps (see the explanations given below and in Deep Learning / Object
Detection and Instance Segmentation). Here, we will explain a basic Convolutional Neural Network (CNN). Such
a network consists of a certain number of layers or filters, which are arranged and connected in a specific way.
In general, any layer is a building block performing specific tasks. It can be seen as a container, which receives
input, transforms it according to a function, and returns the output to the next layer. Thereby different functions
are possible for different types of layers. Several possible examples are given in the “Solution Guide on
Classification”. Many layers or filters have weights, parameters which are also called filter weights. These
are the parameters modified during the training of a network. The output of most layers are feature maps. Thereby
the number of feature maps (the depth of the layer output) and their size (width and height) depends on the specific
layer.
Schema of an extract of a possible classification network. Below we show feature maps corresponding to the
layers, zoomed to a uniform size.
To train a network for a specific task, a loss function is added. There are different loss functions depending on
the task, but they all work according to the following principle. A loss function compares the prediction from
the network with the given information, what it should find in the image (and, if applicable, also where), and
penalizes deviations. Now the filter weights are updated in such a way that the loss function is minimized. Thus,
training the network for the specific task, one strives to minimize the loss (an error function) of the network, in the hope that doing so will also improve the performance measure. In practice, this optimization is done by calculating
the gradient and updating the parameters of the different layers (filter weights) accordingly. This is repeated by
iterating multiple times over the training data.
There are additional parameters that influence the training, but which are not directly learned during the regular
training. These parameters have values set before starting the training. We refer to this last type of parameters as
hyperparameters in order to distinguish them from the network parameters that are optimized during training. See
the section “Setting the Training Parameters: The Hyperparameters”.
To train all filter weights from scratch a lot of resources are needed. Therefore, one can take advantage of the following observation. The first layers detect low-level features like edges and curves. The feature maps of the following layers are smaller, but they represent more complex features. For a large network, the low-level features
are general enough so the weights of the corresponding layers will not change much among different tasks. This
leads to a technique called transfer learning: One takes an already trained network and retrains it for a specific task,
benefiting from already quite suitable filter weights for the lower layers. As a result, considerably less resources
are needed. While in general the network should be more reliable when trained on a larger dataset, the amount of
data needed for retraining also depends on the complexity of the task. A basic schema for the workflow of transfer
learning is shown with the aid of classification in the figure below.
With the momentum method, a fraction µ of the previous update vector is added when the loss function arguments are updated, i.e., we repeat the step we did for the last update, but this time only µ times as long. A visualization is given in the figure below. A too large
learning rate might result in divergence of the algorithm, a very small learning rate will take unnecessarily many
steps. Therefore, it is customary to start with a larger learning rate and potentially reduce it during training. With
a momentum µ = 0, the momentum method has no influence, so only the gradient determines the update vector.
Sketch of the ’learning_rate’ and the ’momentum’ during an update step. The gradient step: the learning
rate λ times the gradient g (λg - dashed lines). The momentum step: the momentum µ times the previous update
vector v (µv - dotted lines). Together, they form the actual step: the update vector v (v - solid lines).
To prevent the neural networks from overfitting (see the part “Risk of Underfitting and Overfitting” below), reg-
ularization can be used. With this technique an extra term is added to the loss function. One possible type of
regularization is weight decay, for details see the documentation of train_dl_model_batch. It works by
penalizing large weights, i.e., pushing the weights towards zero. Simply put, this regularization favors simpler
models that are less likely to fit to noise in the training data and generalize better. It can be set by the hyperpa-
rameter ’weight_prior’. Choosing its value is a trade-off between the model’s ability to generalize, overfitting, and
underfitting. If ’weight_prior’ is too small the model might overfit; if it is too large, the model might lose its ability to fit the data well because all weights are effectively zero.
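A minimal sketch of setting these hyperparameters on a DL model handle (the values are purely illustrative):
set_dl_model_param (DLModelHandle, 'learning_rate', 0.001)
set_dl_model_param (DLModelHandle, 'momentum', 0.9)
set_dl_model_param (DLModelHandle, 'weight_prior', 0.0001)
set_dl_model_param (DLModelHandle, 'batch_size', 32)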
With the training data and all the hyperparameters, there are many different aspects that can have an influence on
the outcome of such complex algorithms. To improve the performance of a network, adding training data generally helps as well. Please note that whether gathering more data is a good solution also depends on how easily this can be done. Usually, a small additional fraction will not noticeably change the total performance.
Supervising the training
The different DL methods have different results. Accordingly they also use different measures to determine ’how
well’ a network performs. When training a network, there are behaviors and pitfalls applying to different models,
which are described here.
Validation During Training When it comes to the validation of the network performance, it is important to note
that this is not a pure optimization problem (see the parts “The Network and the Training Process” and
“Setting the Training Parameters” above).
In order to observe the training progress, it is usually helpful to visualize a validation measure, e.g., for
the training of a classification network, the error over the samples of a batch. As the samples differ, the
difficulty of the assignment task may differ. Thus it may be that the network performs better or worse for the
samples of a given batch than for the samples of another batch. So it is normal that the validation measure
is not changing smoothly over the iterations. But in total it should improve. Adjusting the hyperparameters
’learning_rate’ and ’momentum’ can help to improve the validation measure again. The following figures
show possible scenarios.
Sketch of a validation measure during training, here using the error from classification as example. (1)
General tendencies for possible outcomes with different ’learning_rate’ values, dark blue: good learning rate,
gray: very high learning rate, light blue: high learning rate, orange: low learning rate. (2) Ideal case with a
learning rate policy to reduce the ’learning_rate’ value after a given number of iterations. In orange: training
error, dark blue: validation error. The arrow marks the iteration, at which the learning rate is decreased.
Risk of Underfitting and Overfitting Underfitting occurs if the model is not able to capture the complexity of
the task. It is directly reflected in the validation measure on the training set which stays high.
Overfitting happens when the network starts to ’memorize’ training data instead of learning how to gener-
alize. This is shown by a validation measure on the training set which stays good or even improves while
the validation measure on the validation set decreases. In such a case, regularization may help. See the
explanations of the hyperparameter ’weight_prior’ in the section “Setting the Training Parameters: The Hy-
perparameters”. Note that a similar phenomenon occurs when the model capacity is too high with respect to
the data.
Sketch of a possible overfitting scenario, visible on the generalization gap (indicated with the arrow). The
error from classification serves as an example for a validation measure.
Confusion Matrix A network infers for an instance a top prediction, the class for which the network deduces
the highest affinity. When we know its ground truth class, we can compare the two class affiliations: the
predicted one and the correct one. What constitutes an instance differs between the methods: in classification, for example, the instances are images, whereas in semantic segmentation the instances are single pixels.
When more than two classes are distinguished, one can also reduce the comparison into binary problems.
This means, for a given class you just compare if it is the same class (positive) or any other class (negative).
For such binary classification problems the comparison is reduced to the following four possible entities (whereof not all are applicable for every method):
• True positive (TP): the instance belongs to the class and is predicted as belonging to it.
• True negative (TN): the instance does not belong to the class and is not predicted as belonging to it.
• False positive (FP): the instance does not belong to the class but is predicted as belonging to it.
• False negative (FN): the instance belongs to the class but is not predicted as belonging to it.
A confusion matrix is a table with such comparisons. This table makes it easy to see how well the network
performs for each class. For every class it lists how many instances have been predicted into which class.
E.g., for a classifier distinguishing the three classes ’apple’, ’peach’, and ’pear’, the confusion matrix shows how many images with ground truth class affiliation ’apple’ have been classified as ’apple’ and how many have been classified as ’peach’ or ’pear’. Of course, this is listed for the other classes as well. This
example is shown in the figure below. In HALCON, we represent for each class the instances with this
ground truth label in a column and the instances predicted to belong to this class in a row.
An example of a confusion matrix from classification. We see that 68 images of an ’apple’ have been classified as such (TP), 60 images not showing an ’apple’ have been correctly classified as a ’peach’ (30) or ’pear’ (30) (TN), 0 images show a ’peach’ or a ’pear’ but have been classified as an ’apple’ (FP), and 24 images of an ’apple’ have wrongly been classified as ’peach’ (21) or ’pear’ (3) (FN). (1) A confusion matrix
for all three distinguished classes. It appears as if the network ’confuses’ apples and peaches more than all
other combinations. (2) The confusion matrix of the binary problem to better visualize the ’apple’ class.
Glossary
In the following, we describe the most important terms used in the context of deep learning:
Adam Adam (adaptive moment estimation) is a first-order gradient-based optimization algorithm for stochastic
objective functions, which computes individual adaptive learning rates. In the deep learning methods this
algorithm can be used to minimize the loss function.
anchor Anchors are fixed bounding boxes. They serve as reference boxes, with the aid of which the network
proposes bounding boxes for the objects to be localized.
annotation An annotation is the ground truth information, what a given instance in the data represents, in a way
recognizable for the network. This is e.g., the bounding box and the corresponding label for an instance in
object detection.
anomaly An anomaly means something deviating from the norm, something unknown.
backbone A backbone is a part of a pretrained classification network. Its task is to generate various feature maps,
for what reason the classifying layer has been removed.
batch size - hyperparameter ’batch_size’ The dataset is divided into smaller subsets of data, which are called
batches. The batch size determines the number of images taken into a batch and thus processed simultane-
ously.
bounding box Bounding boxes are rectangular boxes used to define a part within an image and to specify the
localization of an object within an image.
class agnostic Class agnostic means without the knowledge of the different classes.
In HALCON, we use it for reduction of overlapping predicted bounding boxes. This means, for a class
agnostic bounding box suppression the suppression of overlapping instances is done ignoring the knowledge
of classes, thus strongly overlapping instances get suppressed independently of their class.
change strategy A change strategy denotes the strategy, when and how hyperparameters are changed during the
training of a DL model.
class Classes are discrete categories (e.g., ’apple’, ’peach’, ’pear’) that the network distinguishes. In HALCON,
the class of an instance is given by its appropriate annotation.
classifier In the context of deep learning we refer to the term classifier as follows. The classifier takes an image
as input and returns the inferred confidence values, expressing how likely the image belongs to every distin-
guished class. E.g., the three classes ’apple’, ’peach’, and ’pear’ are distinguished. Now we give an image
of an apple to the classifier. As a result, the confidences ’apple’: 0.92, ’peach’: 0.07, and ’pear’: 0.01 could
be returned.
COCO COCO is an abbreviation for "common objects in context", a large-scale object detection, segmentation,
and captioning dataset. There is a common file format for each of the different annotation types.
confidence Confidence is a number expressing the affinity of an instance to a class. In HALCON the confidence
is the probability, given in the range of [0,1]. Alternative name: score
confusion matrix A confusion matrix is a table which compares the classes predicted by the network (top-1) with
the ground truth class affiliations. It is often used to visualize the performance of the network on a validation
or test set.
Convolutional Neural Networks (CNNs) Convolutional Neural Networks are neural networks used in deep
learning, characterized by the presence of at least one convolutional layer in the network. They are par-
ticularly successful for image classification.
data We use the term data in the context of deep learning for instances to be recognized (e.g., images) and their
appropriate information concerning the predictable characteristics (e.g., the labels in case of classification).
data augmentation Data augmentation is the generation of altered copies of samples within a dataset. This is
done in order to augment the richness of the dataset, e.g., through flipping or rotating.
dataset: training, validation, and test set With dataset we refer to the complete set of data used for a training.
The dataset is split into three, if possible disjoint, subsets:
• The training set contains the data on which the algorithm optimizes the network directly.
• The validation set contains the data to evaluate the network performance during training.
• The test set is used to test possible inferences (predictions), thus to test the performance on data without
any influence on the network optimization.
deep learning The term "deep learning" was originally used to describe the training of neural networks with
multiple hidden layers. Today it is rather used as a generic term for several different concepts in machine
learning. In HALCON, we use the term deep learning for methods using a neural network with multiple
hidden layers.
epoch In the context of deep learning, an epoch is a single training iteration over the entire training data, i.e., over
all batches. Iterations over epochs should not be confused with the iterations over single batches (e.g., within
an epoch).
errors In the context of deep learning, we refer to an error when the inferred class of an instance does not match the
real class (e.g., the ground truth label in case of classification). Within HALCON, we use the term error in
deep learning when we refer to the top-1 error.
feature map A feature map is the output of a given layer.
feature pyramid A feature pyramid is simply a group of feature maps, whereby every feature map originates from a different level, i.e., it is smaller than the feature maps of the preceding levels.
head Heads are subnetworks. For certain architectures they attach on selected pyramid levels. These subnetworks process information from previous parts of the total network in order to generate spatially resolved output, e.g., for the class predictions. From this they generate the output of the total network and therewith constitute the input of the losses.
hyperparameter Like every machine learning model, CNNs contain many formulas with many parameters. Dur-
ing training the model learns from the data in the sense of optimizing the parameters. However, such models
can have other, additional parameters, which are not directly learned during the regular training. These
parameters have values set before starting the training. We refer to this last type of parameters as hyperpa-
rameters in order to distinguish them from the network parameters that are optimized during training. Or
from another point of view, hyperparameters are solver-specific parameters.
Prominent examples are the initial learning rate or the batch size.
inference phase The inference phase is the stage when a trained network is applied to predict (infer) instances
(which can be the total input image or just a part of it) and eventually their localization. Unlike during the
training phase, the network is not changed anymore in the inference phase.
in-distribution In-distribution refers to data that comes from the same underlying distribution as the data on which
a model was trained. When a model encounters in-distribution data during inference, the data is similar in
terms of its statistical properties, features, and patterns to what the model has seen before during training.
intersection over union The intersection over union (IoU) is a measure to quantify the overlap of two areas. We
can determine the parts common in both areas, the intersection, as well as the united areas, the union. The
IoU is the ratio between the two areas intersection and union.
The application of this concept may differ between the methods.
label Labels are arbitrary strings used to define the class of an image. In HALCON these labels are given by the
image name (eventually followed by a combination of underscore and digits) or by the directory name, e.g.,
’apple_01.png’, ’pear.png’, ’peach/01.png’.
layer and hidden layer A layer is a building block in a neural network, thus performing specific tasks (e.g., con-
volution, pooling, etc., for further details we refer to the “Solution Guide on Classification”).
It can be seen as a container, which receives weighted input, transforms it, and returns the output to the next
layer. Input and output layers are connected to the dataset, i.e., the images or the labels, respectively. All
layers in between are called hidden layers.
learning rate - hyperparameter ’learning_rate’ The learning rate is the weighting, with which the gradient is
considered when updating the arguments of the loss function. In simple words, when we want to optimize a
function, the gradient tells us the direction in which we shall optimize and the learning rate determines how
far along this direction we step.
Alternative names: λ, step size
level The term level is used to denote within a feature pyramid network the whole group of layers, whose feature
maps have the same width and height. Thereby the input image represents level 0.
loss A loss function compares the prediction from the network with the given information, what it should find in
the image (and, if applicable, also where), and penalizes deviations. This loss function is the function we
optimize during the training process to adapt the network to a specific task.
Alternative names: objective function, cost function, utility function
momentum - hyperparameter ’momentum’ The momentum µ ∈ [0, 1) is used for the optimization of the loss
function arguments. When the loss function arguments are updated (after having calculated the gradient), a
fraction µ of the previous update vector (of the past iteration step) is added. This has the effect of damping
oscillations. We refer to the hyperparameter µ as momentum. When µ is set to 0, the momentum method has
no influence. In simple words, when we update the loss function arguments, we still remember the step we
did for the last update. Now we go a step in direction of the gradient with a length according to the learning
rate and additionally we repeat the step we did last time, but this time only µ times as long.
non-maximum suppression In object detection, non-maximum suppression is used to suppress overlapping pre-
dicted bounding boxes. When different instances overlap more than a given threshold value, only the one
with the highest confidence value is kept while the other instances, not having the maximum confidence
value, are suppressed.
Out-of-Distribution Out-of-Distribution refers to data that significantly differs from the data on which a model
was trained. When a model encounters out-of-distribution data during inference, the data’s statistical prop-
erties, features, or patterns are unfamiliar to the model, leading to potential challenges in making accurate
predictions.
overfitting Overfitting happens when the network starts to ’memorize’ training data instead of learning how to
find general rules for the classification. This becomes visible when the model continues to minimize error on
the training set but the error on the validation set increases. Since most neural networks have a huge amount
of weights, these networks are particularly prone to overfitting.
regularization - hyperparameter ’weight_prior’ Regularization is a technique to prevent neural networks from
overfitting by adding an extra term to the loss function. It works by penalizing large weights, i.e., pushing
the weights towards zero. Simply put, regularization favors simpler models that are less likely to fit to
noise in the training data and generalize better. In HALCON, regularization is controlled via the parameter
’weight_prior’.
Alternative names: regularization parameter, weight decay parameter, λ (note that in HALCON we use λ
for the learning rate and within formulas the symbol α for the regularization parameter).
retraining We define retraining as updating the weights of an already pretrained network, i.e., during retraining
the network learns the specific task.
Alternative names: fine-tuning.
solver The solver optimizes the network by updating the weights in a way to optimize (i.e., minimize) the loss.
stochastic gradient descent (SGD) SGD is an iterative optimization algorithm for differentiable functions. A key
feature of the SGD is to calculate the gradient only based on a single batch containing stochastically sampled
data and not all data. In the deep learning methods this algorithm can be used to calculate the gradient to
optimize (i.e., minimize) the loss function.
top-k error For a given image, the classifier infers class confidences indicating how likely the image belongs to
each of the distinguished classes. Thus, for an image we can sort the predicted classes according to the
confidence value the classifier assigned. The top-k error tells the ratio of predictions where the ground truth
class is not within the k predicted classes with the highest probability. In the case of the top-1 error, we check
if the target label matches the prediction with the highest probability. In the case of the top-3 error, we check
if the target label matches one of the top 3 predictions (the 3 labels getting the highest probability for this image).
Alternative names: top-k score
transfer learning Transfer learning refers to the technique where a network is built upon the knowledge of an
already existing network. In concrete terms, this means taking an already (pre)trained network with its
weights and adapting the output layer to the respective application to obtain your network. In HALCON, we
also see the following retraining step as a part of transfer learning.
underfitting Underfitting occurs when the model over-generalizes. In other words it is not able to describe the
complexity of the task. This is directly reflected in the error on the training set, which does not decrease
significantly.
weights In general, weights are the free parameters of the network, which are altered during the training due to the
optimization of the loss. A layer with weights multiplies its input values by them or adds them to its input
values. In contrast to hyperparameters, weights are optimized and thus changed during the training.
Further Information
Get an introduction to deep learning or learn about datasets for deep learning and many other topics in interactive
online courses at our MVTec Academy.
get_dl_device_param ( : : DLDeviceHandle,
GenParamName : GenParamValue )
’calibration_precisions’: Specifies the data types that can be used for a calibration of a deep learning model.
List of values: ’int8’.
’cast_precisions’: Specifies the data types that can be used for a cast of a deep learning model. In contrast to a
calibration, changing the data type via a cast does not require any images.
List of values: ’float32’, ’float16’.
’conversion_supported’: Returns ’true’ if data types for either a calibration or a cast of a deep learning model
are available. Returns ’false’ in any other case.
’id’: The ID of the device. Within each inference engine, the IDs of its supported devices are unique. The same
holds for devices supported through HALCON.
’inference_only’: Indicates if the device can only be used to infer deep learning models (’true’) or also supports
training or gradient-based operations (’false’).
’ai_accelerator_interface’: AI Accelerator Interface (AI2) on which this device DLDeviceHandle is executed. In
case the device is directly supported by HALCON, the value ’none’ is returned.
List of values: ’tensorrt’, ’openvino’, ’none’.
’info’: Dictionary containing additional information on the device.
Restriction: Only for devices that are supported via an AI2-interface.
’name’: Name of the device.
’optimize_for_inference_params’: Dictionary with default-defined conversion parameters for a calibration or cast
operation of a deep learning model. The entries can be changed.
In case no parameter applies to the set device, an empty dictionary is returned.
Restriction: Only for devices that are supported via an AI2-interface.
’precisions’: Specifies the data types that the device supports for the weights and/or activations of a deep-learning-
based model.
List of values: ’float32’, ’float16’, ’int8’.
’settable_device_params’: Dictionary with settable device parameters.
Restriction: Only for devices that are supported via an AI2-interface.
’type’: Type of the device.
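The following lines sketch how these parameters can be queried in HDevelop; they assume that at least one deep-learning-capable device is available on the system (see query_available_dl_devices):
query_available_dl_devices ([], [], DLDeviceHandles)
* Inspect the first device that was found.
get_dl_device_param (DLDeviceHandles[0], 'name', DeviceName)
get_dl_device_param (DLDeviceHandles[0], 'type', DeviceType)
get_dl_device_param (DLDeviceHandles[0], 'conversion_supported', ConversionSupported)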
Parameters
. DLDeviceHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_device ; handle
Handle of the deep-learning-capable hardware device.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.name ; string
Name of the generic parameter.
Default: ’type’
List of values: GenParamName ∈ {’calibration_precisions’, ’cast_precisions’, ’conversion_supported’, ’id’,
’ai_accelerator_interface’, ’inference_only’, ’info’, ’name’, ’optimize_for_inference_params’, ’precisions’,
’settable_device_params’, ’type’}
. GenParamValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . attribute.name(-array) ; string / real / integer
Value of the generic parameter.
Result
If the parameters are valid, the operator get_dl_device_param returns the value 2 (H_MSG_TRUE). If nec-
essary, an exception is raised.
Execution Information
optimize_dl_model_for_inference ( : : DLModelHandle,
DLDeviceHandle, Precision, DLSamples,
GenParam : DLModelHandleConverted, ConversionReport )
The parameter Precision determines the precision to which the model shall be converted. The following values
are supported:
• ’float32’
• ’float16’
• ’int8’
The parameter DLSamples specifies the samples on which the calibration is based. As a consequence they should
be representative. It is recommended to provide them from the training split. For most applications 10-20 samples
per class are sufficient to achieve good results.
Note that the samples are not needed for a pure cast operation. In this case, an empty tuple can be passed for
DLSamples.
The parameter GenParam specifies additional, device-specific parameters and their values. Which parameters
to set for the given DLDeviceHandle in GenParam and their default values can be queried via the
get_dl_device_param operator with the ’optimize_for_inference_params’ parameter.
Note that certain devices only accept an empty dictionary.
The parameter ConversionReport returns a report dictionary with information about the conversion.
Attention
This operator can only be used via an AI2 -interface. Furthermore, after optimization only parameters that do not
change the underlying architecture of the model can be set for DLModelHandleConverted.
For set_dl_model_param, this includes the following parameters:
• ’device’
• ’runtime’
• ’max_overlap’, ’min_score’
Only devices of the AI2-interface that was used for the optimization can be set via ’device’ or ’runtime’. Additional
restrictions may apply to these parameters to ensure that the underlying architecture of the model does not change.
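As an illustration, the following sketch casts a pretrained classifier to ’float16’; it assumes that a device supported via the OpenVINO AI2-interface is installed (for a pure cast, no calibration samples are needed):
read_dl_model ('pretrained_dl_classifier_compact.hdl', DLModelHandle)
query_available_dl_devices (['ai_accelerator_interface'], ['openvino'], DLDeviceHandles)
get_dl_device_param (DLDeviceHandles[0], 'optimize_for_inference_params', OptimizeParams)
optimize_dl_model_for_inference (DLModelHandle, DLDeviceHandles[0], 'float16', [], OptimizeParams, DLModelHandleConverted, ConversionReport)
set_dl_model_param (DLModelHandleConverted, 'device', DLDeviceHandles[0])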
Parameters
. DLModelHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_model ; handle
Input model.
. DLDeviceHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_device(-array) ; handle
Device handle used for optimization.
. Precision (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Precision the model shall be converted to.
. DLSamples (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dict-array ; handle
Samples required for optimization.
. GenParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dict ; handle
Parameter dict for optimization.
. DLModelHandleConverted (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_model ; handle
Output model with new precision.
. ConversionReport (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dict ; handle
Output report for conversion.
Result
If the parameters are valid, the operator optimize_dl_model_for_inference returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
query_available_dl_devices ( : : GenParamName,
GenParamValue : DLDeviceHandles )
The devices returned in DLDeviceHandles can be filtered: a device is only returned if, for each parameter name
given in GenParamName, it matches at least one of its corresponding values that appear in GenParamValue. A
parameter can have more than one value by duplicating its name in GenParamName and adding a different
corresponding value in GenParamValue.
A deep-learning-capable device is either supported directly through HALCON or through an AI2 -interface.
The devices that are supported directly through HALCON are equivalent to those that can be set to a deep learning
model via set_dl_model_param using ’runtime’ = ’cpu’ or ’runtime’ = ’gpu’. HALCON provides an internal
implementation for the inference or training of a deep learning model for those devices. See Deep Learning for
more details.
Devices that are supported through the AI2 -interface can also be set to a deep learning model using
set_dl_model_param. In this case the inference is not executed by HALCON but by the device itself.
query_available_dl_devices returns a handle for each deep-learning-capable device supported through
HALCON and through an inference engine.
If a device is supported through HALCON and one or several inference engines,
query_available_dl_devices returns a handle for HALCON and for each inference engine.
GenParamName can be used to filter for the devices. All GenParamName that are gettable by
get_dl_device_param and that do not return a handle-typed value for GenParamValue are supported for
filtering. See the operator reference of get_dl_device_param for the list of gettable parameters. In addition,
the following values are supported:
’runtime’: Filters for the devices that are directly supported by HALCON via the specified runtime.
List of values: ’cpu’, ’gpu’.
GenParamName can contain the same parameter name multiple times. In this case, the filter combines the
corresponding entries with a logical ’or’. Please see the example code below for examples of how to use the filter.
Parameters
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Name of the generic parameter.
Default: []
List of values: GenParamName ∈ {’calibration_precisions’, ’cast_precisions’, ’conversion_supported’, ’id’,
’ai_accelerator_interface’, ’inference_only’, ’name’, ’optimize_for_inference_params’, ’precisions’,
’runtime’, ’settable_device_params’, ’type’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Value of the generic parameter.
Default: []
Suggested values: GenParamValue ∈ {}
. DLDeviceHandles (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_device(-array) ; handle
Tuple of DLDevice handles
Example
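The following lines are a sketch of typical filter queries; which devices are actually returned depends on the installed hardware and AI2-interfaces:
* All devices that are directly supported by HALCON ('cpu' or 'gpu' runtime).
query_available_dl_devices (['runtime','runtime'], ['cpu','gpu'], DLDeviceHandlesRuntime)
* All devices that support the 'float16' precision.
query_available_dl_devices (['precisions'], ['float16'], DLDeviceHandlesFloat16)
* All devices of type 'gpu', regardless of how they are supported.
query_available_dl_devices (['type'], ['gpu'], DLDeviceHandlesGPU)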
Result
If the parameters are valid, the operator query_available_dl_devices returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
9.1 Anomaly Detection and Global Context Anomaly Detection
This chapter explains how to use anomaly detection and Global Context Anomaly Detection based on deep
learning.
With those two methods we want to detect whether or not an image contains anomalies. An anomaly means
something deviating from the norm, something unknown.
An anomaly detection or Global Context Anomaly Detection model learns common features of images without
anomalies. The trained model will infer how likely it is that an input image contains only learned features or
whether the image contains something different. The latter is interpreted as an anomaly. This inference result is
returned as a gray value image. The pixel values therein indicate how likely the corresponding pixels in the input
image show an anomaly.
We differentiate between two model types that can be used:
Anomaly Detection With anomaly detection (model type ’anomaly_detection’) structural anomalies are targeted,
i.e., any feature that was not learned during training. This can, e.g., include scratches, cracks, or contamination.
A possible example for anomaly detection: Every pixel of the input image gets assigned a value that
indicates how likely the pixel is to be an anomaly. The worm is not part of the worm-free apples the model
has seen during training and therefore its pixels get a much higher score.
Global Context Anomaly Detection Global Context Anomaly Detection (model type ’gc_anomaly_detection’)
targets both structural and logical anomalies.
A possible example for Global Context Anomaly Detection: Every pixel of the input image gets assigned a
value that indicates how likely the pixel is to be an anomaly. Thereby two different types of anomalies can
be detected, structural and logical ones. Structural anomaly: One apple contains a worm, which differs
from the apples the model has seen during training. Logical anomaly: One apple is sorted among lemons.
Although the apple itself is intact, the logical constraint is violated, as the model has only seen images with
correctly sorted fruit during training.
The Global Context Anomaly Detection model consists of two subnetworks. The model can be reduced
to one of the subnetworks in order to reduce runtime and memory consumption. This is recommended
if a single subnetwork performs well enough. See the parameter ’gc_anomaly_networks’ in
get_dl_model_param for details. After setting ’gc_anomaly_networks’, the model needs to be
evaluated again, since this parameter can change the Global Context Anomaly Detection performance
significantly.
• Local subnetwork
This subnetwork is used to detect anomalies that affect the image on a smaller, local scale. It is
designed to detect structural anomalies but can find logical anomalies as well. Thus, if an anomaly can
be recognized by analyzing single patches of an image, it is detected by the local component of the
model. See the description of the parameter ’patch_size’ in get_dl_model_param for information
on how to define the local scale of this subnetwork.
• Global subnetwork
This subnetwork is used to detect anomalies that affect the image on a large, or global scale. It is
designed to detect logical anomalies but can find structural anomalies as well. Thus, if you need to see
most or all of the image to recognize an anomaly, it is detected by the global component of the model.
Training image of an exemplary task. Apples and lemons are intact, sorted correctly, and tagged with the
correct sticker.
Some anomalies that can be detected with Global Context Anomaly Detection: (1) Logical anomaly, most
likely detected by the local subnetwork (wrong sticker). (2) Structural anomaly, most likely detected by local
subnetwork (wormy apple). (3) Logical anomaly, most likely detected by global subnetwork (wrong sorting).
(4) Logical anomaly, most likely detected by global subnetwork (missing apples).
General Workflow
In this paragraph, we describe the general workflow for an anomaly detection or Global Context Anomaly Detec-
tion task based on deep learning.
Preprocess the data This part is about how to preprocess your data.
1. The information content of your dataset needs to be converted. This is done by the procedure
• read_dl_dataset_anomaly.
It creates a dictionary DLDataset which serves as a database and stores all necessary information
about your data. For more information about the data and the way it is transferred, see the section
“Data” below and the chapter Deep Learning / Model.
2. Split the dataset represented by the dictionary DLDataset. This can be done using the procedure
• split_dl_dataset.
3. The network imposes several requirements on the images. These requirements (for example the image
size and gray value range) can be retrieved with
• get_dl_model_param.
For this you need to read the model first by using
• read_dl_model.
4. Now you can preprocess your dataset. For this, you can use the procedure
• preprocess_dl_dataset.
In case of custom preprocessing, this procedure offers guidance on the implementation.
To use this procedure, specify the preprocessing parameters such as, e.g., the image size. Store all the
parameters with their values in a dictionary DLPreprocessParam, for which you can use the procedure
• create_dl_preprocess_param.
We recommend saving this dictionary DLPreprocessParam in order to have access to the prepro-
cessing parameter values later during the inference phase.
Train the model This part is about how to train the model.
1. Set the training parameters and store them in the dictionary TrainParam. This can be done using the
procedure
• create_dl_train_param.
2. Train the model. This can be done using the procedure
• train_dl_model.
The procedure
• adapts models of type ’gc_anomaly_detection’ to the image statistics of the dataset calling the
procedure normalize_dl_gc_anomaly_features,
• calls the corresponding training operator train_dl_model_anomaly_dataset
(’anomaly_detection’) or train_dl_model_batch (’gc_anomaly_detection’), respec-
tively.
The procedure expects:
• the model handle DLModelHandle
• the dictionary DLDataset containing the data information
• the dictionary TrainParam containing the training parameters
3. Normalize the network. This step is only necessary when using a Global Context Anomaly Detection
model. The anomaly scores need to be normalized by applying the procedure
• normalize_dl_gc_anomaly_scores.
This needs to be done in order to get reasonable results when applying a threshold on the anomaly
scores later (see section “Specific Parameters” below).
Evaluation of the trained model In this part, we evaluate the trained model.
Inference on new images This part covers the application of an anomaly detection or Global Context Anomaly
Detection model. For a trained model, perform the following steps:
1. Request the requirements the model imposes on the images using the operator
• get_dl_model_param
or the procedure
• create_dl_preprocess_param_from_model.
2. Set the model parameters described in the section “Specific Parameters” below, using the operator
• set_dl_model_param.
3. Generate a data dictionary DLSample for each image. This can be done using the procedure
• gen_dl_samples_from_images.
4. Every image has to be preprocessed the same way as for the training. For this, you can use the proce-
dure
• preprocess_dl_samples.
When you saved the dictionary DLPreprocessParam during the preprocessing step, you can di-
rectly use it as input to specify all parameter values.
5. Apply the model using the operator
• apply_dl_model.
6. Retrieve the results from the dictionary DLResult.
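A minimal sketch of these steps in HDevelop, assuming a trained model file, the saved dictionary DLPreprocessParam from the preprocessing step, and an illustrative image name:
* The model file and image names below are placeholders.
read_dl_model ('model_anomaly_trained.hdl', DLModelHandle)
read_image (Image, 'inspection_image_01')
gen_dl_samples_from_images (Image, DLSampleBatch)
preprocess_dl_samples (DLSampleBatch, DLPreprocessParam)
apply_dl_model (DLModelHandle, DLSampleBatch, [], DLResultBatch)
get_dict_tuple (DLResultBatch[0], 'anomaly_score', AnomalyScore)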
Data
We distinguish between data used for training, evaluation, and inference on new images.
As a basic concept, the model handles data by dictionaries, meaning it receives the input data from a dictionary
DLSample and returns a dictionary DLResult and DLTrainResult, respectively. More information on the
data handling can be found in the chapter Deep Learning / Model.
Classes In anomaly detection and Global Context Anomaly Detection there are exactly two classes: ’ok’ (no
anomaly) and ’nok’ (anomaly).
Scheme of anomaly_file_name. For visibility, gray values are used to represent numbers. (1) Input
image. (2) The corresponding anomaly_file_name providing the class annotations, 0: ’ok’ (white and light
gray), 2: ’nok’ (dark gray).
Images The model poses requirements on the images, such as the dimensions, the gray value range, and the
type. The specific values depend on the model itself. See the documentation of read_dl_model for the
specific values of different models. For a read model they can be queried with get_dl_model_param.
In order to fulfill these requirements, you may have to preprocess your images. Standard preprocessing of
an entire sample, including the image, is implemented in preprocess_dl_samples. In case of custom
preprocessing, this procedure offers guidance on the implementation.
Model output The training output differs depending on the used model type.
As inference and evaluation output, the model will return a dictionary DLResult for every sample. For
anomaly detection and Global Context Anomaly Detection, this dictionary includes the following extra
entries:
• anomaly_score: A score indicating how likely the entire image is to contain an anomaly. This
score is based on the pixel scores given in anomaly_image.
For Global Context Anomaly Detection, depending on the used subnetworks, the anomaly
score can also be calculated by the local (anomaly_score_local) and the global
(anomaly_score_global) subnetwork only. The anomaly_score is by default equal to the
maximum of anomaly_image. The parameter anomaly_score_tolerance can be used to
ignore a fraction of outliers in the anomaly_image when calculating the anomaly_score.
• anomaly_image: An image, where the value of each pixel indicates how likely its corresponding
pixel in the input image shows an anomaly (see the illustration below). For anomaly detection the
values are ∈ [0, 1], whereas there are no constraints for Global Context Anomaly Detection. Depending
on the used subnetworks, when using Global Context Anomaly Detection, an anomaly image can also
be calculated by the local (anomaly_image_local) or the global (anomaly_image_global)
subnetwork only.
Scheme of anomaly_image. For visualization purposes, gray values are used to represent numbers. (1) The
anomaly_file_name providing the class annotations, 0: ’ok’ (white and light gray), 2: ’nok’ (dark gray). (2)
The corresponding anomaly_image.
Specific Parameters
For an anomaly detection or Global Context Anomaly Detection model, the model parameters as well as the
hyperparameters are set using set_dl_model_param. The model parameters are explained in more detail in
get_dl_model_param. As the training for an anomaly detection model is done utilizing the full dataset at
once and not batch-wise, certain parameters, such as ’batch_size_multiplier’, have no influence.
The model returns scores but classifies neither pixels nor images as showing an anomaly or not. For this classification,
thresholds need to be given, setting the minimum score for a pixel or image to be regarded as anomalous. You
can estimate possible thresholds using the procedure compute_dl_anomaly_thresholds. Applying these
thresholds can be done with the procedure threshold_dl_anomaly_results. As a result, the procedure
adds the following (threshold-dependent) entries into the dictionary DLResult of a sample:
anomaly_class
The predicted class of the entire image (for the given threshold). For Global Context Anomaly De-
tection, depending on the used subnetworks, the anomaly class can also be calculated by the local
(anomaly_class_local) and the global (anomaly_class_global) subnetwork only.
anomaly_class_id
ID of the predicted class of the entire image (for the given threshold). For Global Context Anomaly De-
tection, depending on the used subnetworks, the anomaly class ID can also be calculated by the local
(anomaly_class_id_local) and the global (anomaly_class_id_global) subnetwork only.
anomaly_region
Region consisting of all the pixels that are regarded as showing an anomaly (for the given threshold,
see the illustration below). For Global Context Anomaly Detection, depending on the used subnetworks,
the anomaly region can also be calculated by the local (anomaly_region_local) and the global
(anomaly_region_global) subnetwork only.
Scheme of anomaly_region. For visualization purposes, gray values are used to represent numbers. (1)
The anomaly_image with the obtained pixel scores. (2) The corresponding anomaly_region.
(1) anomaly_image after inference with ’full_domain’ (result: ’nok’), (2) anomaly_image after inference
with ’keep_domain’ (result: ’ok’).
• max_num_epochs: This parameter specifies the maximum number of epochs performed during training. In
case the criterion specified by error_threshold is reached in an earlier epoch, the training will terminate
regardless.
Restriction: max_num_epochs >=1.
Default: max_num_epochs = 30.
• error_threshold: This parameter is a termination criterion for the training. If the training error is less
than the specified error_threshold, the training terminates successfully.
Restriction:
0.0 <= error_threshold <= 1.0.
Default: error_threshold = 0.001.
• domain_ratio: This parameter determines the percentage of information of each image used for training.
Since images tend to contain an abundance of information, it is advisable to reduce its amount. Additionally,
reducing domain_ratio can decrease the time needed for training. Please note, however, sufficient infor-
mation needs to remain and therefore this value should not be set too small either. Otherwise the training
result might not be satisfactory or the training itself might even fail.
Restriction: 0.0 < domain_ratio <= 1.0.
Default: domain_ratio = 0.1.
• regularization_noise: This parameter can be set to regularize the training in order to improve ro-
bustness.
Restriction: regularization_noise >=0.0.
Default: regularization_noise = 0.0.
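A minimal sketch of how such a parameter dictionary could be assembled and passed to the training operator; DLModelHandle and the preprocessed training samples DLSampleTrain are assumed to have been prepared beforehand:
create_dict (DLTrainParam)
set_dict_tuple (DLTrainParam, 'max_num_epochs', 30)
set_dict_tuple (DLTrainParam, 'error_threshold', 0.001)
set_dict_tuple (DLTrainParam, 'domain_ratio', 0.1)
set_dict_tuple (DLTrainParam, 'regularization_noise', 0.0)
train_dl_model_anomaly_dataset (DLModelHandle, DLSampleTrain, DLTrainParam, DLTrainResult)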
Attention
The operator train_dl_model_anomaly_dataset internally calls functions that might not be determin-
istic. Therefore, results from multiple calls of train_dl_model_anomaly_dataset can slightly differ,
although the same input values have been used.
System requirements: To run this operator on GPU by setting ’runtime’ to ’gpu’ (see get_dl_model_param),
cuDNN and cuBLAS are required. For further details, please refer to the “Installation Guide”, paragraph
“Requirements for Deep Learning and Deep-Learning-Based Methods”. Alternatively, this operator can also be
run on CPU by setting ’runtime’ to ’cpu’.
Parameters
. DLModelHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_model ; handle
Deep learning model handle.
. DLSamples (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dict-array ; handle
Tuple of Dictionaries with input images and corresponding information.
. DLTrainParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dict ; handle
Parameter for training the anomaly detection model.
Default: []
. DLTrainResult (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dict ; handle
Dictionary with the train result data.
Result
If the parameters are valid, the operator train_dl_model_anomaly_dataset returns the value 2
(H_MSG_TRUE). If necessary, an exception is raised.
Execution Information
9.2 Classification
This chapter explains how to use classification based on deep learning, both for the training and inference phases.
Classification based on deep learning is a method in which an image gets a set of confidence values assigned.
These confidence values indicate how likely the image belongs to each of the distinguished classes. Thus, if we
regard only the top prediction, classification means assigning a specific class out of a given set of classes to an
image. This is illustrated in the following schema.
A possible classification example, in which the network distinguishes three classes. The input image gets
confidence values assigned for each of the three distinguished classes: ’apple’ 0.85, ’lemon’ 0.03, and ’orange’
0.12. The top prediction tells us that the image is recognized as ’apple’.
Out-of-Distribution Detection for classification is a method for identifying inputs which differ significantly from
the classes the model was trained on. It is crucial for ensuring model safety and robustness. Out-of-Distribution
Detection helps to filter potentially problematic cases for further review. This is illustrated in the following schema.
A possible example of classification with the addition of Out-of-Distribution Detection. The object in the
inference image differs significantly from the data used to train the network. In addition to the confidence values
for the three classes to be distinguished (’apple’ 0.65, ’lemon’ 0.22, and ’orange’ 0.13), the network also indicates
that the image does not belong to any of the three trained classes (Out-of-Distribution).
In order to perform your specific task, i.e., to classify your data into the classes you want to distinguish, the
classifier has to be trained accordingly. In HALCON, we use a technique called transfer learning (see also the
chapter Deep Learning). Hence, we provide pretrained networks, representing classifiers which have been trained
on huge amounts of labeled image data. These classifiers have been trained and tested to perform well on industrial
image classification tasks. One of these classifiers, already trained for general classifications, is now retrained for
your specific task. For this, the classifier needs to know which classes are to be distinguished and what examples
of these classes look like. This is represented by your dataset, i.e., your images with the corresponding ground
truth labels. More information on the data requirements can be found in the section “Data”.
In HALCON, classification with deep learning is implemented within the more general deep learning model. For
more information on the latter, see the chapter Deep Learning / Model. For the specific system requirements in
order to apply deep learning, please refer to the HALCON “Installation Guide”.
The following sections are introductions to the general workflow needed for classification, information related to
the involved data and parameters, and explanations of the evaluation measures.
General Workflow
In this paragraph, we describe the general workflow for a classification task based on deep learning. It is subdivided
into the four parts preprocessing of the data, training of the model, evaluation of the trained model, and inference
on new images. Thereby we assume that your dataset is already labeled, see also the section “Data” below. Have a
look at the HDevelop example series classify_pill_defects_deep_learning for an application.
Preprocess the data This part is about how to preprocess your data. The single steps are also shown in the
HDevelop example classify_pill_defects_deep_learning_1_preprocess.hdev.
1. The information about what is to be found in which image of your training dataset needs to be transferred.
This is done by the procedure
• read_dl_dataset_classification.
Thereby a dictionary DLDataset is created, which serves as a database and stores all necessary
information about your data. For more information about the data and the way it is transferred, see the
section “Data” below and the chapter Deep Learning / Model.
2. Split the dataset represented by the dictionary DLDataset. This can be done using the procedure
• split_dl_dataset.
The resulting split will be saved over the key split in each sample entry of DLDataset.
3. Read in a pretrained network using the operator
• read_dl_model.
This operator is likewise used when you want to read your own trained networks, after you saved them
with write_dl_model.
The network will impose several requirements on the images, such as the image dimensions and the gray
value range. The default values are listed in read_dl_model. These are the values with which the
networks have been pretrained. The network architectures allow different image dimensions, which can
be set with set_dl_model_param, but depending on the network a change may make a retraining
necessary (see the sketch after this list). The actually set values can be retrieved with
• get_dl_model_param.
4. Now you can preprocess your dataset. For this, you can use the procedure
• preprocess_dl_dataset.
In case of custom preprocessing, this procedure offers guidance on the implementation.
To use this procedure, specify the preprocessing parameters such as, e.g., the image size. Store all the
parameters with their values in a dictionary DLPreprocessParam, for which you can use the procedure
• create_dl_preprocess_param.
We recommend saving this dictionary DLPreprocessParam in order to have access to the prepro-
cessing parameter values later during the inference phase.
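The following lines sketch step 3 of this list: reading one of the delivered pretrained classifiers and querying its image requirements via get_dl_model_param:
read_dl_model ('pretrained_dl_classifier_compact.hdl', DLModelHandle)
get_dl_model_param (DLModelHandle, 'image_width', ImageWidth)
get_dl_model_param (DLModelHandle, 'image_height', ImageHeight)
get_dl_model_param (DLModelHandle, 'image_num_channels', ImageNumChannels)
get_dl_model_param (DLModelHandle, 'image_range_min', ImageRangeMin)
get_dl_model_param (DLModelHandle, 'image_range_max', ImageRangeMax)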
Training of the model This part is about how to train a classifier. The single steps are also shown in the HDevelop
example classify_pill_defects_deep_learning_2_train.hdev.
1. Set the training parameters and store them in the dictionary TrainParam. These parameters include:
• the hyperparameters, for an overview see the chapter Deep Learning.
• parameters for possible data augmentation (optional).
• parameters for the evaluation during training.
• parameters for the visualization of training results.
• parameters for serialization.
This can be done using the procedure
• create_dl_train_param.
2. Train the model. This can be done using the procedure
• train_dl_model.
The procedure expects:
• the model handle DLModelHandle
• the dictionary with the data information DLDataset
• the dictionary with the training parameter TrainParam
• the information over how many epochs the training shall run.
In case the procedure train_dl_model is used, the total loss as well as optional evaluation mea-
sures are visualized.
Evaluation of the trained model In this part we evaluate the trained classifier. The single steps are also shown in
the HDevelop example classify_pill_defects_deep_learning_3_evaluate.hdev.
Fit model to Out-of-Distribution Detection (optional) In this part, we extend the trained classifier so it
can detect out-of-distribution data. The single steps are also shown in the HDevelop example
detect_out_of_distribution_samples_for_classification.hdev.
Inference on new images This part covers the application of a deep-learning-based clas-
sification model. The single steps are also shown in the HDevelop example
classify_pill_defects_deep_learning_4_infer.hdev.
Data
We distinguish between data used for training and data used for inference. The latter consists of bare images. For
the former, however, you already know to which class the images belong and provide this information via the
corresponding labels.
As a basic concept, the model handles data by dictionaries, meaning it receives the input data from a dictionary
DLSample and returns a dictionary DLResult and DLTrainResult, respectively. More information on the
data handling can be found in the chapter Deep Learning / Model.
Data for training and evaluation The dataset consists of images and corresponding information. They have to be
provided in a way the model can process them. Concerning the image requirements, find more information
in the section “Images” below.
The training data is used to train and evaluate a network for your specific task. With the aid of this data the
classifier can learn which classes are to be distinguished and what their representatives look like. In classi-
fication, the image is classified as a whole. Therefore, the training data consists of images and their ground
truth labels, i.e., the class you say an image belongs to. Note that the images should be as representative as
possible for your task. There are different possible ways to store and retrieve this information. How the
data has to be formatted in HALCON for a DL model is explained in the chapter Deep Learning / Model. In
short, a dictionary DLDataset serves as a database for the information needed by the training and evalua-
tion procedures. The procedure read_dl_dataset_classification supports different sources of
the ground truth label for an image, e.g., the image name or the directory name (see the glossary entry “label” above).
For training a classifier, we use a technique called transfer learning (see the chapter Deep Learning). For this,
you need less resources, but still a suitable set of data. While in general the network should be more reliable
when trained on a larger dataset, the amount of data needed for training also depends on the complexity of
the task. You also want enough training data to split it into three subsets, used for training, validation, and
testing the network. These subsets are preferably independent and identically distributed, see the section
“Data” in the chapter Deep Learning.
Images Regardless of the application, the network poses requirements on the images regarding e.g.,
the image dimensions. The specific values depend on the network itself and can be queried
with get_dl_model_param. In order to fulfill these requirements, you may have to prepro-
cess your images. Standard preprocessing is implemented in preprocess_dl_dataset and in
preprocess_dl_samples for a single sample, respectively. In case of custom preprocessing these
procedures offer guidance on the implementation.
Network output The network output depends on the task:
training As output, the operator will return a dictionary DLTrainResult with the current value of the
total loss as well as values for all other losses included in your model.
inference and evaluation As output, the network will return a dictionary DLResult for every sample. For
classification, this dictionary will include for each input image a tuple with the confidence values for
every class to be distinguished in decreasing order and a second tuple with the corresponding class IDs.
Confusion Matrix, Precision, Recall, and F-score In classification whole images are classified. As a conse-
quence, the instances of a confusion matrix are images. See the chapter Deep Learning for explanations
on confusion matrices.
You can generate a confusion matrix with the aid of the procedures gen_confusion_matrix and
gen_interactive_confusion_matrix. Thereby, the interactive procedure gives you the possibility
to select examples of a specific category, but it does not work with exported code.
From such a confusion matrix you can derive various values. The precision is the proportion of all correctly
predicted positives to all predicted positives (true and false ones). Thus, it is a measure of how many positive
predictions really belong to the selected class.
precision = TP / (TP + FP)
The recall, also called the "true positive rate", is the proportion of all correctly predicted positives to all real
positives. Thus, it is a measure of how many samples belonging to the selected class were predicted correctly
as positives.
recall = TP / (TP + FN)
A classifier with high recall but low precision finds most of the positives (thus members of the class),
but at the cost of also classifying many negatives as members of the class. A classifier with high precision but
low recall is just the opposite, classifying only few samples as positives, but most of these predictions are
correct. An ideal classifier with high precision and high recall will classify many samples as positive with a
high accuracy.
To represent this with a single number, we compute the F1-score, the harmonic mean of precision and recall.
Thus, it is a measure of the classifier’s accuracy.
F1-score = 2 * (precision * recall) / (precision + recall)
For the example from the confusion matrix shown in Deep Learning we get for the class ’ap-
ple’ the values precision: 1.00 (= 68/(68+0+0)), recall: 0.74 (= 68/(68+21+3)), and F1-score: 0.85
(=2*(1.00*0.74)/(1.00+0.74)).
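Expressed as a few HDevelop lines (a sketch using the counts of this example):
TP := 68
FP := 0
FN := 21 + 3
Precision := real(TP) / (TP + FP)
Recall := real(TP) / (TP + FN)
F1Score := 2 * (Precision * Recall) / (Precision + Recall)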
For fit_dl_out_of_distribution to work properly, it is important that DLDataset is the same dataset
with the same split and preprocessing parameters as the one used for training DLModelHandle. It is crucial
that the provided dataset DLDataset contains diverse and sufficient samples for each class to ensure reliable
Out-of-Distribution Detection. If the dataset is too small or lacks variation, fit_dl_out_of_distribution
may return an error. In such cases, additional training data should be added to the dataset.
fit_dl_out_of_distribution can be applied to any classification model supported by HALCON. For
models created using Deep Learning / Framework operators or read from an ONNX model file, Out-of-Distribution
Detection compatibility may vary depending on the architecture.
The performance of the model for Out-of-Distribution Detection can be evaluated using the procedure
evaluate_dl_model. To evaluate the model on out-of-distribution data, these can be added to the
DLDataset using the procedure add_dl_out_of_distribution_data, allowing for testing whether the
model can accurately separate in-distribution from out-of-distribution data. Adjustments to the ’ood_threshold’
will affect evaluation results. Therefore, it is recommended to re-evaluate the model after making such changes.
GenParam is a dictionary for setting generic parameters. Currently no generic parameters are supported.
Attention
If fit_dl_out_of_distribution is called for a model that has already been extended with Out-of-
Distribution Detection, the previous internal calculations are discarded and the model is adapted anew.
Certain modifications to the model, such as changing the number of classes or continuing training of the model,
cannot be performed once the model has been extended for Out-of-Distribution Detection. To make such changes
possible, the model internal Out-of-Distribution Detection must first be removed from the model using the pa-
rameter ’clear_ood’ in set_dl_model_param. Once removed, fit_dl_out_of_distribution can be
called again to re-enable Out-of-Distribution Detection.
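A minimal sketch of the call, assuming DLModelHandle is the trained classifier and DLDataset the dataset used for its training; since no generic parameters are currently supported, an empty dictionary is passed:
create_dict (GenParam)
fit_dl_out_of_distribution (DLModelHandle, DLDataset, GenParam)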
Parameters
. DLModelHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_model ; handle
Handle of a deep learning classification model.
. DLDataset (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dict ; handle
Dataset, which was used for training the model.
. GenParam (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dict ; handle
Dictionary for generic parameters.
Execution Information
Possible Predecessors
read_dl_model
Module
Deep Learning Professional
9.3 Framework
create_dl_layer_activation ( : : DLLayerInput, LayerName, ActivationType,
GenParamName, GenParamValue : DLLayerActivation )
The parameter ActivationType determines the activation type. The following values are supported:
’relu’: Rectified linear unit (ReLU) activation. By setting a specific ReLU parameter, another type can be specified
instead of the standard ReLU:
• Standard ReLU, defined as follows:
ReLU(x) := 0 if x ≤ 0, x if 0 < x ≤ β, β else.
Setting the generic parameter ’upper_bound’ will result in a bounded ReLU and determines the value of
β.
• Leaky ReLU, defined as follows:
ReLU(x) := αx if x ≤ 0, x else.
Setting the generic parameter ’leaky_relu_alpha’ results in a leaky ReLU and determines the value α.
’sigmoid’: Sigmoid activation, which is defined as follows:
Sigmoid(x) := 1 / (1 + e^(−x))
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
’upper_bound’: Float value defining an upper bound for a rectified linear unit. If the activation layer is part of
a model which has been created using create_dl_model, the upper bound can be unset. To do so, use
set_dl_model_layer_param and set an empty tuple for ’upper_bound’.
Default: []
Certain parameters of layers created using this operator create_dl_layer_activation can be set and
retrieved using further operators. The following tables give an overview of which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note that the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.
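A minimal sketch creating a ReLU activation bounded at 6.0; the input layer, its name, and its shape are illustrative (create_dl_layer_input is assumed to be used as documented for that operator):
create_dl_layer_input ('input', [224,224,3], [], [], DLLayerInput)
create_dl_layer_activation (DLLayerInput, 'relu6', 'relu', 'upper_bound', 6.0, DLLayerActivation)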
Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. ActivationType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Activation type.
Default: ’relu’
List of values: ActivationType ∈ {’relu’, ’sigmoid’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’, ’upper_bound’, ’leaky_relu_alpha’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerActivation (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Activation layer.
Execution Information
create_dl_layer_batch_normalization ( : : DLLayerInput,
LayerName, Momentum, Epsilon, Activation, GenParamName,
GenParamValue : DLLayerBatchNorm )
To affect the mean and variance values you can set the following options for Momentum:
Given number: For example: 0.9. This is the default and recommended option.
Restriction: 0 ≤ Momentum < 1
’auto’: Combines mean and variance values by a cumulative moving average. This is only recommended in case
the parameters of all previous layers in the network are frozen, i.e., have a learning rate of 0.
’freeze’: Stops the adjustment of the mean and variance and their values stay fixed. In this case, the mean and vari-
ance are used during training for normalizing a batch, analogously to how the batch normalization operates
during inference. The parameters of the linear scale and shift transformation, however, remain learnable.
Epsilon is a small offset to the variance and used to control the numerical stability. Usually its default value
should be adequate.
The parameter DLLayerInput determines the feeding input layer.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The parameter Activation determines whether an activation is performed after the batch normalization in order
to optimize the runtime performance.
’bias_filler’: See create_dl_layer_convolution for a detailed explanation of this parameter and its val-
ues.
List of values: ’xavier’, ’msra’, ’const’.
Default: ’const’
’bias_filler_const_val’: Constant value.
Restriction: ’bias_filler’ must be set to ’const’.
Default: 0
’bias_filler_variance_norm’: See create_dl_layer_convolution for a detailed explanation of this pa-
rameter and its values.
List of values: ’norm_out’, ’norm_in’, ’norm_average’, or constant value (in combination with ’bias_filler’
= ’msra’).
Default: ’norm_out’
’bias_term’: Determines whether the created batch normalization layer has a bias term (’true’) or not (’false’).
Default: ’true’
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
’learning_rate_multiplier’: Multiplier for the learning rate for this layer that is used during training. If ’learn-
ing_rate_multiplier’ is set to 0.0, the layer is skipped during training.
Default: 1.0
’learning_rate_multiplier_bias’: Multiplier for the learning rate of the bias term. The total bias learning rate is
the product of ’learning_rate_multiplier_bias’ and ’learning_rate_multiplier’.
Default: 1.0
’upper_bound’: Float value defining an upper bound for a rectified linear unit. If the activation layer is part of a
model, which has been created using create_dl_model, the upper bound can be unset. To do so, use
set_dl_model_layer_param and set an empty tuple for ’upper_bound’.
Default: []
’weight_filler’: See create_dl_layer_convolution for a detailed explanation of this parameter and its
values.
List of values: ’xavier’, ’msra’, ’const’.
Default: ’const’
’weight_filler_const_val’: See create_dl_layer_convolution for a detailed explanation of this parame-
ter and its values.
Default: 1.0
’weight_filler_variance_norm’: See create_dl_layer_convolution for a detailed explanation of this pa-
rameter and its values.
List of values: ’norm_in’, ’norm_out’, ’norm_average’, or constant value (in combination with
’weight_filler’ = ’msra’).
Default: ’norm_in’
Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. Momentum (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string / real
Momentum.
Default: 0.9
List of values: Momentum ∈ {0.9, 0.99, 0.999, ’auto’, ’freeze’}
. Epsilon (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Variance offset.
Default: 0.0001
. Activation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Optional activation function.
Default: ’none’
List of values: Activation ∈ {’none’, ’relu’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’bias_filler’, ’bias_filler_variance_norm’, ’bias_filler_const_val’,
’bias_term’, ’is_inference_output’, ’learning_rate_multiplier’, ’learning_rate_multiplier_bias’,
’upper_bound’, ’weight_filler’, ’weight_filler_variance_norm’, ’weight_filler_const_val’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’xavier’, ’msra’, ’const’, ’nearest_neighbor’, ’bilinear’, ’norm_in’,
’norm_out’, ’norm_average’, ’true’, ’false’, 1.0, 0.9, 0.0}
. DLLayerBatchNorm (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Batch normalization layer.
Example
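A minimal sketch; the input layer and its shape are illustrative. The batch normalization uses the default momentum 0.9 and epsilon 0.0001 and is followed by a ReLU activation:
create_dl_layer_input ('input', [224,224,3], [], [], DLLayerInput)
create_dl_layer_batch_normalization (DLLayerInput, 'bn1', 0.9, 0.0001, 'relu', [], [], DLLayerBatchNorm)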
Execution Information
create_dl_layer_class_id_conversion ( : : DLLayerInput,
LayerName, ConversionMode, GenParamName,
GenParamValue : DLLayerClassIdConversion )
• ’from_class_id’: Convert target / output class IDs into internal IDs. This mode is typically used after a target
input layer.
• ’to_class_id’: Convert internal IDs into target / output class IDs. This mode is typically used after an infer-
ence output layer.
The parameter DLLayerInput determines the feeding input layer and expects the layer handle as value.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. ConversionMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Direction of the class ID conversion.
Default: ’from_class_id’
List of values: ConversionMode ∈ {’from_class_id’, ’to_class_id’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerClassIdConversion (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Class IDs conversion layer.
Example
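A minimal sketch; the layer names and the input shape are illustrative. The target class IDs provided by an input layer are converted into internal IDs:
create_dl_layer_input ('class_id_target', [1,1,1], [], [], DLLayerTarget)
create_dl_layer_class_id_conversion (DLLayerTarget, 'class_id_from', 'from_class_id', [], [], DLLayerClassIdConversion)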
Execution Information
create_dl_layer_concat ( : : DLLayerInputs, LayerName, Axis, GenParamName,
GenParamValue : DLLayerConcat )
Note that all non-concatenated dimensions must be equal for all input data tensors.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
Certain parameters of layers created using this operator create_dl_layer_concat can be set and re-
trieved using further operators. The following tables give an overview of which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note that the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.
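A minimal sketch; the layer names and shapes are illustrative. Two inputs with identical width and height are concatenated along the 'depth' axis:
create_dl_layer_input ('branch_a', [112,112,64], [], [], DLLayerInputA)
create_dl_layer_input ('branch_b', [112,112,32], [], [], DLLayerInputB)
create_dl_layer_concat ([DLLayerInputA,DLLayerInputB], 'concat1', 'depth', [], [], DLLayerConcat)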
Parameters
. DLLayerInputs (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer(-array) ; handle
Feeding input layers.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. Axis (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Dimension along which the input layers are concatenated.
Default: ’depth’
List of values: Axis ∈ {’batch’, ’batch_interleaved’, ’depth’, ’height’, ’width’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerConcat (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Concatenation layer.
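For illustration, a minimal sketch of a typical call to create_dl_layer_concat; the feeding layer handles DLLayerConv1 and DLLayerConv2 and the output variable name are assumptions.
* Concatenate two feeding layers along the depth axis.
* All non-concatenated dimensions of the inputs must be equal.
create_dl_layer_concat ([DLLayerConv1,DLLayerConv2], 'concat', 'depth', [], [], \
                        DLLayerConcat)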
Execution Information
• ’half_kernel_size’: The number of appended pixels depends on the specified KernelSize. More precisely, it is calculated as ⌊KernelSize/2⌋, where for the padding on the left / right border the value of KernelSize in dimension width is regarded and for the padding on the upper / lower border the value of KernelSize in height.
• ’none’: No pixels are appended.
• Number of pixels: Specify the number of pixels appended on each border. To do so, the following tuple
lengths are supported:
– Single number: Padding in all four directions left/right/top/bottom.
– Two numbers: Padding in left/right and top/bottom: [l/r, t/b].
– Four numbers: Padding on left, right, top, bottom side: [l,r,t,b].
Restriction: ’runtime’ ’gpu’ does not support asymmetric padding, i.e., the padding values for the left
and right side must be equal, as well as the padding values for the top and bottom side.
Restriction: The integer padding values must be smaller than the value set for KernelSize in the corre-
sponding dimension.
The output dimensions (width and height) of the convolution are calculated as follows:
output_dim = (input_dim + padding_begin + padding_end − (KernelSize + (KernelSize − 1) · (Dilation − 1))) / Stride + 1
Thereby we use the following values: output_dim: output width/height, input_dim: input width/height,
padding_begin: number of pixels added to the left/top of the input image, and padding_end: number of pix-
els added to the right/bottom of the input image.
The parameter Activation determines whether an activation is performed directly after the convolution in order to optimize the runtime performance. The supported values are ’none’, ’relu’, and ’sigmoid’ (see the parameter description below).
We refer to the “Solution Guide on Classification” for more general information about the convo-
lution layer and the reference given below for more detailed information about the arithmetic of the layer.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
’learning_rate_multiplier’: Multiplier for the learning rate for this layer that is used during training. If ’learn-
ing_rate_multiplier’ is set to 0.0, the layer is skipped during training.
Default: 1.0
’learning_rate_multiplier_bias’: Multiplier for the learning rate of the bias term. The total bias learning rate is
the product of ’learning_rate_multiplier_bias’ and ’learning_rate_multiplier’.
Default: 1.0
’upper_bound’: Float value, which defines the upper bound for ReLU. To unset the upper bound, set ’up-
per_bound’ to an empty tuple.
Default: []
’weight_filler’: This parameter defines the mode how the weights are initialized. The following values are sup-
ported:
• ’const’: The weights are filled with constant values.
• ’msra’: The weights are drawn from a Gaussian distribution.
• ’xavier’: The weights are drawn from a uniform distribution.
Default: ’xavier’
’weight_filler_const_val’: Specifies the constant weight initialization value.
Restriction: Only applied if ’weight_filler’ = ’const’.
Default: 0.5
’weight_filler_variance_norm’: This parameter determines the value range for ’weight_filler’. The following val-
ues are supported:
• ’norm_average’: the values are based on the average of the input and output size
• ’norm_in’: the values are based on the input size
• ’norm_out’: the values are based on the output size.
Default: ’norm_in’
Certain parameters of layers created using create_dl_layer_convolution can be set and re-
trieved using further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.
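For illustration, a minimal sketch of a typical call to create_dl_layer_convolution; the feeding layer handle DLLayerInput and the output variable name DLLayerConvolution are assumptions.
* 3x3 convolution with 64 filter kernels, dilation 1, stride 1, one filter
* group, 'half_kernel_size' padding, and a subsequent ReLU activation.
create_dl_layer_convolution (DLLayerInput, 'conv1', 3, 1, 1, 64, 1, \
                             'half_kernel_size', 'relu', [], [], \
                             DLLayerConvolution)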
Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. KernelSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; integer
Width and height of the filter kernels.
Default: 3
. Dilation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; integer
Amount of filter dilation for width and height.
Default: 1
. Stride (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; integer
Amount of filter shift in width and height direction.
Default: 1
. NumKernel (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Number of filter kernels.
Default: 64
. Groups (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Number of filter groups.
Default: 1
. Padding (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; string / integer
Padding type or specific padding size.
Default: ’none’
List of values: Padding ∈ {’none’, ’half_kernel_size’, [all], [width,height], [left,right,top,bottom]}
Suggested values: Padding ∈ {’none’, ’half_kernel_size’}
. Activation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; string
Enable optional ReLU or sigmoid activations.
Default: ’none’
List of values: Activation ∈ {’none’, ’relu’, ’sigmoid’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’weight_filler’, ’weight_filler_variance_norm’,
’weight_filler_const_val’, ’bias_filler’, ’bias_filler_variance_norm’, ’bias_filler_const_val’, ’bias_term’,
’is_inference_output’, ’learning_rate_multiplier’, ’learning_rate_multiplier_bias’, ’upper_bound’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’xavier’, ’msra’, ’const’, ’nearest_neighbor’, ’bilinear’, ’norm_in’,
’norm_out’, ’norm_average’, ’true’, ’false’, 1.0, 0.9, 0.0}
’bias_filler’: See create_dl_layer_convolution for a detailed explanation of this parameter and its val-
ues.
List of values: ’xavier’, ’msra’, ’const’.
Default: ’const’
’bias_filler_const_val’: Constant value if ’bias_filler’ = ’const’.
Default: 0
’bias_filler_variance_norm’: See create_dl_layer_convolution for a detailed explanation of this pa-
rameter and its values.
List of values: ’norm_out’, ’norm_in’, ’norm_average’, or constant value (in combination with ’bias_filler’
= ’msra’).
Default: ’norm_out’
’bias_term’: Determines whether the created dense layer has a bias term (’true’) or not (’false’).
Default: ’true’
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
’learning_rate_multiplier’: Multiplier for the learning rate for this layer that is used during training. If ’learn-
ing_rate_multiplier’ is set to 0.0, the layer is skipped during training.
Default: 1.0
’learning_rate_multiplier_bias’: Multiplier for the learning rate of the bias term. The total bias learning rate is
the product of ’learning_rate_multiplier_bias’ and ’learning_rate_multiplier’.
Default: 1.0
’weight_filler’: See create_dl_layer_convolution for a detailed explanation of this parameter and its
values.
List of values: ’xavier’, ’msra’, ’const’.
Default: ’xavier’
Certain parameters of layers created using create_dl_layer_dense can be set and retrieved us-
ing further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.
Parameters
Module
Deep Learning Professional
Note, these parameters only need to be set in case such an output layer is requested (see DepthMaxMode).
The parameter LayerName defines the name of the output layer(s), depending on DepthMaxMode.
Note that if creating a model using create_dl_model each layer of the created network must have a unique
name.
The mode DepthMaxMode indicates which depth max value is actually returned as output. The following values
are supported:
’argmax’: The depth index of the maximal value is returned in DLLayerDepthMaxArg.
’value’: The maximal value itself is returned in DLLayerDepthMaxValue.
’argmax_and_value’: Both are returned: the depth index of the maximal value in the output layer
DLLayerDepthMaxArg, and the maximal value itself in the output layer DLLayerDepthMaxValue.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
Certain parameters of layers created using this operator create_dl_layer_depth_max can be set and
retrieved using further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.
Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. DepthMaxMode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Mode to indicate type of return value.
Default: ’argmax’
List of values: DepthMaxMode ∈ {’argmax’, ’value’, ’argmax_and_value’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerDepthMaxArg (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer(-array) ; handle
Optional, depth max layer with mode ’argmax’.
. DLLayerDepthMaxValue (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer(-array) ; handle
Optional, depth max layer with mode ’value’.
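For illustration, a minimal sketch of a typical call to create_dl_layer_depth_max; the feeding softmax layer handle DLLayerSoftMax is an assumption.
* Return the depth index of the maximal value of a preceding softmax layer,
* e.g., as class prediction. With DepthMaxMode = 'argmax' only the first
* output layer is used.
create_dl_layer_depth_max (DLLayerSoftMax, 'prediction', 'argmax', [], [], \
                           DLLayerDepthMaxArg, DLLayerDepthMaxValue)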
Execution Information
The operator create_dl_layer_depth_to_space creates a depth to space layer whose handle is returned
in DLLayerDepthToSpace.
The parameter DLLayerInput determines the feeding input layer and expects the layer handle as value.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
This layer rearranges the elements of the feeding tensor of shape (N, C · r², H, W) to a tensor of shape (N, C, H · r, W · r). Thereby r can be considered an upscale factor, which is set with BlockSize.
The output element (depth, row, col) is mapped from the input element (depth · r² + (row % r) · r + col % r, row / r, col / r).
With Mode the ordering in the output tensor is set. Currently only the ’column_row_depth’ order described above
is available.
Certain parameters of layers created using this operator create_dl_layer_depth_to_space
can be set and retrieved using further operators. The following tables give an overview, which
parameters can be set using set_dl_model_layer_param and which ones can be re-
trieved using get_dl_model_layer_param or get_dl_layer_param. Note, the operators
set_dl_model_layer_param and get_dl_model_layer_param require a model created by
create_dl_model.
Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. BlockSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Block size (i.e., upscale factor).
Default: 3
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Ordering mode.
Default: ’column_row_depth’
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerDepthToSpace (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Depth to space layer.
Example
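A minimal sketch of a typical call; the feeding layer handle DLLayerConv is an assumption for illustration.
* Rearrange a tensor of shape (N, C * 4, H, W) to (N, C, H * 2, W * 2),
* i.e., use an upscale factor (block size) of 2.
create_dl_layer_depth_to_space (DLLayerConv, 'depth_to_space', 2, \
                                'column_row_depth', [], [], \
                                DLLayerDepthToSpace)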
Execution Information
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
Certain parameters of layers created using this operator create_dl_layer_dropout can be set and
retrieved using further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.
Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. Probability (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Probability.
Default: 0.5
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerDropOut (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
DropOut layer.
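For illustration, a minimal sketch of a typical call to create_dl_layer_dropout; the feeding layer handle DLLayerDense is an assumption.
* Dropout layer with a probability of 0.5.
create_dl_layer_dropout (DLLayerDense, 'dropout', 0.5, [], [], DLLayerDropOut)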
Execution Information
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The parameter Operation specifies the operation that is applied. Depending on Operation, the layer supports
implicit broadcasting. I.e., if one of the shape dimensions (batch_size, depth, height, width) of the
second or any of the following input tensors is 1, the values are implicitly multiplied along that dimension to
match the shape of the first input. The supported values for Operation are ’division’, ’maximum’, ’minimum’, ’product’, and ’sum’ (see the parameter description below).
The optional parameter Coefficients determines a weighting coefficient for every input tensor. The number of
values in Coefficients must match the number of feeding layers in DLLayerInputs. Set Coefficients
equal to [] if no coefficients shall be used in the element-wise operation.
Restriction: No coefficients can be set for Operation = ’product’.
Example: for Operation = ’sum’, the i-th element of the output data tensor is given by
output[i] = sum_{n=0}^{N−1} Coefficients[n] · DLLayerInputs_n[i],
where N is the number of feeding layers in DLLayerInputs.
’div_eps’: Small scalar value that is added to the elements of the denominator to avoid a division by zero (for
Operation = ’division’).
Default: 1e-10
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
Parameters
. DLLayerInputs (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer(-array) ; handle
Feeding input layers.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. Operation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Element-wise operations.
Default: ’sum’
List of values: Operation ∈ {’division’, ’maximum’, ’minimum’, ’product’, ’sum’}
. Coefficients (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real
Optional input tensor coefficients.
Default: []
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerElementWise (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Elementwise layer.
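For illustration, a minimal sketch of a typical call; the operator name create_dl_layer_elementwise, the feeding layer handles DLLayerA and DLLayerB, and the output variable name are assumptions. Only the parameter order follows the list above.
* Weighted element-wise sum of two input tensors:
* output[i] = 0.5 * A[i] + 0.5 * B[i].
create_dl_layer_elementwise ([DLLayerA,DLLayerB], 'elementwise_sum', 'sum', \
                             [0.5,0.5], [], [], DLLayerElementWise)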
Execution Information
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
Certain parameters of layers created using this operator create_dl_layer_identity can be set and
retrieved using further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.
Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerIdentity (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Identity layer.
Example
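A minimal sketch of a typical call to create_dl_layer_identity; the feeding layer handle DLLayerConv is an assumption. Here the identity layer is additionally marked as inference output.
* Pass the input through unchanged and include the result in DLResultBatch
* during inference.
create_dl_layer_identity (DLLayerConv, 'identity', 'is_inference_output', \
                          'true', DLLayerIdentity)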
Execution Information
The operator create_dl_layer_input creates an input layer with spatial dimensions given by Shape whose
handle is returned in DLLayerInput.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
When the created model is applied using e.g., apply_dl_model or train_dl_model_batch, it must be
possible to map an input with its corresponding input layer. Operators applying a model expect a feeding dictionary
DLSample, see Deep Learning / Model. The mentioned mapping is done using dictionary entries, where the key
matches the input layer name. Thus, for an input of this layer a sample dictionary will need an entry with the key
LayerName (except if the ’input_type’ is set to ’constant’, see below).
The parameter Shape defines the shape of the input values (the values given in the feeding dictionary DLSample)
and must be a tuple of length three, containing width, height, and depth of the input. The tuple values must
be given as integer values and have different meanings depending on the input type:
• for an input image the layer Shape defines the image size. Images shall be given with type real (for
information on image types see Image).
• for an input tuple its length will need to match the product of the individual values in Shape, i.e., width ×
height × depth.
Tuple values are distributed along the column- (width), row- (height), and depth-axes in this order.
Input tuple values can be given either as integer or real.
The batch size has to be set later with set_dl_model_param, once the model has been created by
create_dl_model.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:
’allow_smaller_tuple’: For tuple inputs, setting ’allow_smaller_tuple’ to ’true’ allows to have an input tuple with
less values than the total dimension given by Shape. E.g., this can be the case if an input corresponds to the
number of objects within one image and the number of objects changes from image to image. If fewer than
the maximum number of values given by the total dimension of Shape are present, the remaining values are
set to zero.
Shape should be set such that it fits the maximum expected length. For the example above this would be the
maximum number of objects within one image present in the whole dataset.
Default: ’false’.
’const_val’: Constant output value.
Restriction:
Only an integer or float is settable. This value is only settable or gettable if ’input_type’ is set to ’constant’.
Default: 0.0.
’input_type’: Defines the type of input that is expected. The following values are possible:
’default’: The layer expects a number of input images corresponding to the batch size.
’region_to_bin’: The layer expects a tuple of regions as input and internally converts it to a binary image
where each region is encoded in one depth channel. Regions reaching out of the given dimensions are
clipped to the width and height given by Shape. The maximum number of regions is defined by the
depth of Shape. If fewer than the maximum number of regions are given, the output is filled up with
empty (zero) images. For example, this can be the case if the regions are corresponding to objects within
an image and the number of objects changes from image to image.
’constant’: The layer does not expect any key value pair in the input dictionary. Instead all entries within the
output of this layer are filled with the value given by ’const_val’.
Default: ’default’.
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
Certain parameters of layers created using create_dl_layer_input can be set and retrieved us-
ing further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.
Parameters
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. Shape (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer
Dimensions of the input (width, height, depth).
Default: [224,224,3]
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’allow_smaller_tuple’, ’const_val’, ’input_type’, ’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {0.0, ’constant’, ’default’, ’false’, ’region_to_bin’, ’true’}
. DLLayerInput (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Input layer.
Example
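A minimal sketch of typical calls to create_dl_layer_input; the layer names and variable names are assumptions for illustration.
* Input layer for images of size 224 x 224 with 3 channels. A sample
* dictionary DLSample then needs an entry with the key 'image'.
create_dl_layer_input ('image', [224,224,3], [], [], DLLayerInputImage)
* Constant input layer that does not require an entry in DLSample.
create_dl_layer_input ('bias_constant', [1,1,1], ['input_type','const_val'], \
                       ['constant',0.0], DLLayerInputConst)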
Execution Information
create_dl_layer_loss_cross_entropy ( : : DLLayerInput,
DLLayerTarget, DLLayerWeights, LayerName, LossWeight,
GenParamName, GenParamValue : DLLayerLossCrossEntropy )
L_cross_entropy(x, t, w) := −(1/W) · sum_{i=0}^{N−1} w_i · x_i[t_i],
where the input x consists of one prediction vector x_i for each pixel, the target t and weight w consist of one value t_i and w_i for each input pixel, N is the number of pixels, and W = sum_{i=0}^{N−1} w_i is the sum over all weights.
• DLLayerInput: Specifies the prediction (e.g., a softmax layer, commonly with logarithmized results).
• DLLayerTarget: Specifies the target sequences (originating from the ground truth information).
• DLLayerWeights: Specifies the weight sequences. This parameter is optional. If an empty tuple [] is
passed for all values the weighting factor 1.0 is used.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The parameter LossWeight determines the scalar weight factor with which the loss, calculated in this layer, is
multiplied. This parameter can be used to specify the contribution of the cross entropy loss to the overall network
loss in case multiple loss layers are used.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
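For illustration, a minimal sketch of a typical call; the feeding layer handles DLLayerSoftMax, DLLayerTarget, and DLLayerWeights are assumptions.
* Cross entropy loss connecting a softmax prediction with target and weight
* input layers; the loss weight is set to 1.0.
create_dl_layer_loss_cross_entropy (DLLayerSoftMax, DLLayerTarget, \
                                    DLLayerWeights, 'loss_cross_entropy', \
                                    1.0, [], [], DLLayerLossCrossEntropy)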
Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Input layer.
. DLLayerTarget (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Target layer.
. DLLayerWeights (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Weights layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. LossWeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Overall loss weight if there are multiple losses in the network.
Default: 1.0
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The CTC loss is typically applied in a CNN as follows. The input sequence is expected to be encoded in some CNN
layer with the output shape [width: T, height: 1, depth: C]. Typically the end of a large fully convolutional
classifier is pooled in height down to 1 with an average pooling layer. It is important that the last layer is
wide enough to hold enough information. In order to obtain the sequence prediction in the output depth a 1x1
convolutional layer is added after the pooling with the number of kernels set to C. In this use case the CTC loss
obtains this convolutional layer as input layer DLLayerInput. The width of the input layer determines the
maximum output sequence of the model.
The CTC loss can be applied to a batch of input items with differing input and target sequence lengths. T and S
are the maximum lengths. In DLLayerInputLengths and DLLayerTargetLengths the individual length
of each item in a batch needs to be specified.
Restrictions
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
Certain parameters of layers created using this operator create_dl_layer_loss_ctc can be set and
retrieved using further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.
Parameters
* Model creation
create_dl_layer_input ('input', [T,1,1], [], [], Input)
create_dl_layer_dense (Input, 'dense', T*C, [], [], DLLayerDense)
create_dl_layer_reshape (DLLayerDense, 'dense_reshape', [T,1,C], [], [],\
ConvFinal)
* Training part
* Inference part
create_dl_layer_softmax (ConvFinal, 'softmax', [], [], DLLayerSoftMax)
create_dl_layer_depth_max (DLLayerSoftMax, 'prediction', 'argmax', [], [],\
DLLayerDepthMaxArg, _)
* Setting a seed because the weights of the network are randomly initialized
set_system ('seed_rand', 35)
PredictedSequence := []
dev_inspect_ctrl ([InputSequence, TargetSequence, CTCLoss, PredictedValues,\
PredictedSequence])
MaxIterations := 15
for I := 0 to MaxIterations by 1
apply_dl_model (DLModel, InputSample, ['prediction','softmax'], \
DLResultBatch)
get_dict_object (Softmax, DLResultBatch, 'softmax')
get_dict_object (Prediction, DLResultBatch, 'prediction')
PredictedValues := []
for t := 0 to T-1 by 1
get_grayval (Prediction, 0, t, PredictionValue)
PredictedValues := [PredictedValues, PredictionValue]
endfor
train_dl_model_batch (DLModel, InputSample, DLTrainResult)
endfor
Execution Information
References
Graves Alex et al., "Connectionist temporal classification: labelling unsegmented sequence data with recurrent
neural networks." Proceedings of the 23rd international conference on Machine learning. 2006.
Module
Deep Learning Professional
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The parameter LossWeight is an overall loss weight if there are multiple losses in the network.
The parameter DistanceType determines which distance measure is applied. Currently, ’l2’ and ’l1’ are implemented. Depending on the generic parameter ’reduce’, the loss is returned either as a single scalar or as a per-element tensor (see ’reduce’ below).
Thus DLLayerInput, DLLayerTarget and DLLayerWeights should have the same size. Setting the
weights in DLLayerWeights to 1 will result in a loss normalized over the number of elements.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
’reduce’: Determines whether the output of the layer is reduced:
• ’true’: The output is reduced to a scalar.
• ’false’: The output of the layer is a tensor, where each element is a ’per-pixel’ loss (squared differences).
Default: ’true’.
Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Input layer.
. DLLayerTarget (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Target layer.
. DLLayerWeights (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Weights layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. DistanceType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of distance.
Default: ’l2’
List of values: DistanceType ∈ {’l2’, ’l1’}
. LossWeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Loss weight. Applies to all losses, if several losses occur in the network.
Default: 1.0
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’, ’reduce’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The parameter LossWeight is an overall loss weight if there are multiple losses in the network.
The parameter Gamma is the exponent of the focal factor.
The parameter ClassWeights defines class-specific weights. All loss contributions of foreground samples of a class are weighted with the given factor, while the background samples are weighted by 1 - ClassWeights. Typically, this is set to 1.0/(number of samples of the class). Note that the length of this tuple must either be 1, in which case the value is broadcast to all classes, or correspond to the number of classes. The default value [] corresponds to a factor of 0.5 for all classes. Note that if the number of classes of a network is changed, the number of class-specific weights is adapted accordingly and reset to the default value of 0.5 for each class.
The parameter Type sets the focal loss type; the supported values ’focal_binary’ and ’sigmoid_focal_binary’ are described in the parameter list below.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
Certain parameters of layers created using this operator create_dl_layer_loss_focal can be set and
retrieved using further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.
Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Input layer.
. DLLayerTarget (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Target layer.
. DLLayerWeights (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Weights layer.
. DLLayerNormalization (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Normalization layer.
Default: []
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. LossWeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Overall loss weight if there are multiple losses in the network.
Default: 1.0
. Gamma (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Exponent of the focal factor.
Default: 2.0
. ClassWeights (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; real / integer
Class specific weight.
Default: []
. Type (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Focal loss type.
Default: ’focal_binary’
List of values: Type ∈ {’focal_binary’, ’sigmoid_focal_binary’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerLossFocal (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Focal loss layer.
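For illustration, a minimal sketch of a typical call to create_dl_layer_loss_focal; the feeding layer handles are assumptions, and the normalization input is left empty.
* Focal loss with Gamma = 2.0, default class weights (factor 0.5 per class),
* and type 'focal_binary'.
create_dl_layer_loss_focal (DLLayerInput, DLLayerTarget, DLLayerWeights, [], \
                            'loss_focal', 1.0, 2.0, [], 'focal_binary', \
                            [], [], DLLayerLossFocal)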
Execution Information
L_Huber(x, t, w, n) := (α/n) · sum_{i=0}^{N−1} w_i · l(x_i − t_i),   with
l(y) := 0.5 · y² / β      if |y| < β
l(y) := |y| − 0.5 · β     otherwise.
The underlying data tensors are assumed to be of the same shape with a total number of N elements.
The parameter DLLayerNormalization can be used to determine the normalization factor n. If
DLLayerNormalization is set to an empty tuple, the sum over all weights is used for the normalization
n.
The parameter LossWeight determines the scalar weight factor α.
The parameter Beta sets the value for β in the formula. If Beta is set to 0, the Huber loss is equal to an L1-loss.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
Certain parameters of layers created using this operator create_dl_layer_loss_huber can be set and
retrieved using further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.
Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Input layer.
. DLLayerTarget (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Target layer.
. DLLayerWeights (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Weights layer.
. DLLayerNormalization (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Normalization layer.
Default: []
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. LossWeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Scalar weight factor.
Default: 1.0
. Beta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Beta value in the loss-defining formula.
Default: 1.1
Restriction: Beta >= 0
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerLossHuber (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Huber loss layer.
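For illustration, a minimal sketch of a typical call to create_dl_layer_loss_huber; the feeding layer handles are assumptions. The normalization input is left empty, so the sum over all weights is used for the normalization.
* Huber loss with LossWeight = 1.0 and Beta = 1.1.
create_dl_layer_loss_huber (DLLayerInput, DLLayerTarget, DLLayerWeights, [], \
                            'loss_huber', 1.0, 1.1, [], [], DLLayerLossHuber)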
Execution Information
Module
Deep Learning Professional
LRN(x_c) = x_c · (K + (Alpha/n) · sum_{c'=max(0,c−n/2)}^{min(N−1,c+n/2)} x_{c'}²)^(−Beta),
where n is the size of the local window given by LocalSize, N is the total number of channels, Alpha is the
scaling parameter (used as a normalization constant), Beta is the exponent used as a contrast constant, and K is a
constant summand, which is used to avoid any singularities.
The parameter DLLayerInput determines the feeding input layer and expects the layer handle as value.
The parameter LayerName sets an individual layer name. Note that if creating a model using
create_dl_model each layer of the created network must have a unique name.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
Certain parameters of layers created using this operator create_dl_layer_lrn can be set and re-
trieved using further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.
Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. LocalSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Size of the local window.
Default: 5
. Alpha (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Scaling factor in the LRN formula.
Default: 0.0001
. Beta (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Exponent in the LRN formula.
Default: 0.75
. K (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real
Constant summand in the LRN formula.
Default: 1.0
. NormRegion (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Normalization dimension.
Default: ’across_channels’
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerLRN (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
LRN layer.
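For illustration, a minimal sketch of a typical call to create_dl_layer_lrn with the default parameter values; the feeding layer handle DLLayerConv is an assumption.
* Local response normalization across channels with LocalSize = 5,
* Alpha = 0.0001, Beta = 0.75, and K = 1.0.
create_dl_layer_lrn (DLLayerConv, 'lrn', 5, 0.0001, 0.75, 1.0, \
                     'across_channels', [], [], DLLayerLRN)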
Execution Information
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
’num_trainable_params’: Number of trainable parameters (weights and biases) of the layer.
’transpose_a’: Matrices of input DLLayerA are transposed: C = A^T · B.
Default: ’false’
’transpose_b’: Matrices of input DLLayerB are transposed: C = A · B^T.
Default: ’false’
Certain parameters of layers created using this operator create_dl_layer_matmul can be set and re-
trieved using further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.
Parameters
. DLLayerA (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Input layer A.
. DLLayerB (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Input layer B.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’, ’num_trainable_params’, ’transpose_a’,
’transpose_b’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerMatMul (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
MatMul layer.
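For illustration, a minimal sketch of a typical call to create_dl_layer_matmul; the feeding layer handles DLLayerA and DLLayerB are assumptions.
* Matrix multiplication of two feeding layers with the matrices of the
* second input transposed: C = A * B^T.
create_dl_layer_matmul (DLLayerA, DLLayerB, 'matmul', 'transpose_b', 'true', \
                        DLLayerMatMul)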
Execution Information
Module
Deep Learning Professional
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. Permutation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer
Order of the permuted axes.
Default: [0,1,2,3]
Execution Information
The parameter KernelSize specifies the filter kernel in the dimensions width and height.
The parameter Stride specifies how the filter is shifted.
The values for KernelSize and Stride can be set either as a single value, which is then used for both dimensions, or as a tuple specifying the values for width and height separately.
The parameter Padding determines the padding, thus how many pixels with value 0 are appended on the border
of the processed input image. Supported values are:
• ’half_kernel_size’: The number of appended pixels depends on the specified KernelSize. More precisely, it is calculated as ⌊KernelSize/2⌋, where for the padding on the left / right border the value of KernelSize in dimension width is regarded and for the padding on the upper / lower border the value of KernelSize in height.
• ’implicit’: No pixels are appended on the left or on the top of the input image. The number of pixels appended
on the right or lower border of the input image is Stride − (input_dim − KernelSize)%Stride, or
zero if the kernel size is a divisor of the input dimension. input_dim stands for the input width or height.
• ’none’: No pixels are appended.
• Number of pixels: Specify the number of pixels appended on each border. To do so, the following tuple
lengths are supported:
– Single number: Padding in all four directions left/right/top/bottom.
– Two numbers: Padding in left/right and top/bottom: [l/r, t/b].
– Four numbers: Padding on left, right, top, bottom side: [l,r,t,b].
Restriction: ’runtime’ ’gpu’ does not support asymmetric padding, i.e., the padding values for the left
and right side must be equal, as well as the padding values for the top and bottom side.
Restriction: The integer padding values must be smaller than the value set for KernelSize in the corre-
sponding dimension.
The output dimensions are calculated as follows:
output_dim = (input_dim + padding_begin + padding_end − KernelSize) / Stride + 1
Thereby we use the following values: output_dim: output width, input_dim: input width, padding_begin:
number of pixels added to the left/top of the input image, and padding_end: number of pixels added to the
right/bottom of the input image.
The parameter Mode specifies the mode of the pooling operation. Supported modes are:
’average’: The resulting pixel value is the average of all pixel values in the filter.
’maximum’: The resulting pixel value is the maximum of all pixel values in the filter.
’global_average’: Same as mode ’average’, but the desired output dimensions can be defined via the parameter KernelSize without knowing the spatial dimensions of the input. E.g., if the average over all pixel values of the input shall be returned, set KernelSize to 1; the output width and height are then equal to 1. The internally used kernel size and stride are calculated as follows:
• If KernelSize is a divisor of the input dimensions: The internally used kernel size and stride are both
set to the value input_dim/KernelSize.
• If KernelSize is not a divisor of the input dimension: The calculation of the internally used kernel
size and stride depend on the generic parameter ’global_pooling_mode’:
’overlapping’: The internally used stride is set to ⌊input_dim/KernelSize⌋. The internally used kernel size is then computed as input_dim − (KernelSize − 1) · stride. This leads to overlapping kernels but the whole input image is taken into account for the computation of the output.
’non_overlapping’: The internally used kernel size and stride are set to the same value ⌊input_dim/KernelSize⌋. This leads to non-overlapping pooling kernels, but parts of the input image at the right or bottom border might not be considered when computing the output. In this mode, due to rounding the output size is not always equal to the size given by KernelSize.
’adaptive’: In this mode, for each pixel (k, l) of the output, the size of the corresponding pooling area
within the input is computed adaptively, where k are the row and l are the column indices of the
output. The row indices of the pooling area for pixels of the k-th output row are given by [⌊k · input_dim/KernelSize⌋, ⌈(k + 1) · input_dim/KernelSize⌉), where in this case the height of the KernelSize is used. The computation of the column coordinates is done analogously. This
means that neighboring pooling areas can have a different size which can lead to a less efficient
implementation. However, the pooling areas are only overlapping by one pixel which is generally
less overlap than for ’global_pooling_mode’ ’overlapping’. The whole input image is taken into
account for the computation of the output. For this mode, the parameter Padding must be set to
’none’.
For this mode the parameter Stride is ignored and calculated internally as described above.
’global_maximum’: Same as mode ’global_average’, but the maximum is calculated instead of the average.
For more information about the pooling layer see the “Solution Guide on Classification”.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:
’global_pooling_mode’: Mode for calculation of the internally used kernel size and stride in case of global pooling
(Mode ’global_average’ or ’global_maximum’). See description above. In case of a non-global pooling the
parameter is set to the value ’undefined’.
Default: ’overlapping’
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
Certain parameters of layers created using this operator create_dl_layer_pooling can be set and
retrieved using further operators. The following tables give an overview, which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note, the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.
Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. KernelSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer
Width and height of the filter kernels.
Default: [2,2]
. Stride (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number-array ; integer
Bi-dimensional amount of filter shift.
Default: [2,2]
. Padding (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number(-array) ; string / integer
Padding type or specific padding size.
Default: ’none’
Suggested values: Padding ∈ {’none’, ’half_kernel_size’, ’implicit’}
. Mode (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; string
Mode of pooling operation.
Default: ’maximum’
List of values: Mode ∈ {’maximum’, ’average’, ’global_maximum’, ’global_average’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’global_pooling_mode’, ’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’adaptive’, ’non_overlapping’, ’overlapping’, ’true’, ’false’, 1.0,
0.9, 0.0}
. DLLayerPooling (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Pooling layer.
Execution Information
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
’div_eps’: Small scalar value that is used to stabilize the training. I.e., in case of a division, the value is added to
the denominator to prevent a division by zero.
Default: 1e-10
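For illustration, an L2 normalization of a feature map over its spatial dimensions might be created as follows. This
is a minimal sketch; the feeding layer DLLayerFeature is an assumption:
* Minimal sketch: L2 norm over width and height with an explicit stabilization value.
create_dl_layer_reduce (DLLayerFeature, 'feature_norm', 'norm_l2', \
                        ['width','height'], 'div_eps', 1e-7, DLLayerReduce)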
Certain parameters of layers created using this operator create_dl_layer_reduce can be set and
retrieved using further operators. The following tables give an overview of which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note that the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.
Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding input layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. Operation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Reduce operation.
Default: ’norm_l2’
List of values: Operation ∈ {’norm_l2’, ’sum’}
. Axes (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer / string
Axes to which the reduce operation is applied.
Default: [2,3]
List of values: Axes ∈ {1, 2, 3, ’width’, ’height’, ’depth’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’div_eps’, ’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {1e-10, ’true’, ’false’}
. DLLayerReduce (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Reduce layer.
Example
Execution Information
For a model that was created using create_dl_model the model’s batch size should always be settable with
set_dl_model_param. Hence, either the output batch size of the reshape layer equals the batch size of the
model (batch size in Shape set to 0), or at least one reshape dimension should be calculated automatically (one
value in Shape set to -1).
If the batch size is specified in Shape and is not set to 0, at least one dimension of Shape must therefore be set
to -1. If the batch size is not specified, it is set to 0, which leads to an output batch size equal to that of the input.
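For illustration, a reshape that keeps the model’s batch size and computes one dimension automatically might look
as follows. This is a minimal sketch; the feeding layer DLLayerConv and the parameter order (DLLayerInput,
LayerName, Shape, GenParamName, GenParamValue, DLLayerReshape) are assumptions:
* Minimal sketch: Shape [1,1,-1] specifies no batch size (it defaults to 0, i.e.,
* the model's batch size) and lets the remaining dimension be computed (-1).
create_dl_layer_reshape (DLLayerConv, 'flatten', [1,1,-1], [], [], DLLayerReshape)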
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
Certain parameters of layers created using this operator create_dl_layer_reshape can be set and
retrieved using further operators. The following tables give an overview of which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note that the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.
Parameters
Execution Information
Softmax(x_i) = exp(x_i) / Σ_{j=0}^{N−1} exp(x_j)
where N is the number of inputs. During training, the result of the softmax function is transformed by a logarithm
function, such that the values are suitable as input to e.g., a cross entropy loss layer. This behavior can be changed
by setting the generic parameter ’output_mode’, see below.
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:
’output_mode’: This parameter determines if and in which case the output is transformed by a logarithm function:
• ’default’: During inference, the result of the softmax function is returned as output while during training,
the softmax is further transformed by a logarithm function.
• ’no_log_training’: During training the result of the softmax function is not transformed by a logarithm
function.
• ’log_inference’: The logarithm of the softmax is calculated during inference in the same way as during
training.
Default: ’default’.
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
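For illustration, a softmax layer whose training output is not log-transformed might be created as follows. This is
a minimal sketch; the feeding layer DLLayerDense is an assumption:
* Minimal sketch: softmax without the logarithm during training.
create_dl_layer_softmax (DLLayerDense, 'softmax', 'output_mode', \
                         'no_log_training', DLLayerSoftMax)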
Certain parameters of layers created using this operator create_dl_layer_softmax can be set and
retrieved using further operators. The following tables give an overview of which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note that the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.
Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’output_mode’, ’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’default’, ’no_log_training’, ’log_inference’, ’true’, ’false’}
. DLLayerSoftMax (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Softmax layer.
Execution Information
create_dl_layer_transposed_convolution ( : : DLLayerInput,
LayerName, KernelSize, Stride, KernelDepth, Groups, Padding,
GenParamName, GenParamValue : DLLayerTransposedConvolution )
• ’half_kernel_size’: The integer value of Padding in the formula above depends on the specified
KernelSize. More precisely, it is calculated as ⌊KernelSize/2⌋.
• ’none’: The value of Padding in the formula above is 0.
• Number of pixels: Specify the integer value of Padding in the formula above for each border. To do so, the
following tuple lengths are supported:
– Single number: Padding value for all four directions left/right/top/bottom.
– Two numbers: Padding value for left/right and top/bottom: [l/r, t/b].
– Four numbers: Padding value for left, right, top, bottom side: [l,r,t,b].
Restriction: ’runtime’ ’gpu’ does not support asymmetric padding, i.e., the padding values for the left
and right side must be equal, as well as the padding values for the top and bottom side.
Restriction: The integer padding values must be smaller than the value set for KernelSize.
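For illustration, a typical upsampling step by a factor of two might be created as follows. This is a minimal sketch;
the feeding layer DLLayerFeature and the kernel depth of 64 are assumptions:
* Minimal sketch: 4x4 kernel, stride 2, 64 output channels, no grouping, and a
* symmetric padding of 1 pixel (the padding value must be smaller than KernelSize).
create_dl_layer_transposed_convolution (DLLayerFeature, 'upconv', 4, 2, 64, 1, 1, \
                                        'weight_filler', 'msra', DLLayerUpconv)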
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:
’weight_filler’: Defines the mode how the weights are initialized. See create_dl_layer_convolution
for a detailed explanation of this parameter and its values.
List of values: ’xavier’, ’msra’, ’const’
Default: ’xavier’
’weight_filler_const_val’: See create_dl_layer_convolution for a detailed explanation of this parame-
ter and its values.
Default: 0.5
’weight_filler_variance_norm’: Value range for ’weight_filler’. See create_dl_layer_convolution for
a detailed explanation of this parameter and its values.
List of values: ’norm_average’, ’norm_in’, ’norm_out’, constant value (in combination with ’weight_filler’
= ’msra’)
Default: ’norm_in’
Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. KernelSize (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Width and height of the filter kernels.
Default: 3
. Stride (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Amount of filter shift.
Default: 1
Module
Deep Learning Professional
’true’: The transformation is applied in the HALCON Non-Standard Cartesian coordinate system (edge-centered,
with the origin in the upper left corner, see chapter Transformations / 2D Transformations). Using the x axis
as an example, this leads to:
’false’: The transformation is applied in the HALCON standard coordinate system (pixel centered, with the origin
in the center of the upper left pixel, see chapter Transformations / 2D Transformations). Using the x axis as
an example, this leads to:
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. ScaleWidth (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Ratio output/input width of the layer.
Default: 2.0
. ScaleHeight (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; real / integer
Ratio output/input height of the layer.
Default: 2.0
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Mode of interpolation.
Default: ’bilinear’
List of values: Interpolation ∈ {’bilinear’}
. AlignCorners (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of coordinate transformation between output/input images.
Default: ’false’
List of values: AlignCorners ∈ {’true’, ’false’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
’true’: The transformation is applied in the HALCON Non-Standard Cartesian coordinate system (edge-centered,
with the origin in the upper left corner, see chapter Transformations / 2D Transformations). Using the x axis
as an example, this leads to:
’false’: The transformation is applied in the HALCON standard coordinate system (pixel centered, with the origin
in the center of the upper left pixel, see chapter Transformations / 2D Transformations). Using the x axis as
an example, this leads to:
The following generic parameters GenParamName and the corresponding values GenParamValue are sup-
ported:
’is_inference_output’: Determines whether apply_dl_model will include the output of this layer in the dictio-
nary DLResultBatch even without specifying this layer in Outputs (’true’) or not (’false’).
Default: ’false’
Certain parameters of layers created using this operator create_dl_layer_zoom_size can be set and
retrieved using further operators. The following tables give an overview of which parameters can be set using
set_dl_model_layer_param and which ones can be retrieved using get_dl_model_layer_param
or get_dl_layer_param. Note that the operators set_dl_model_layer_param and
get_dl_model_layer_param require a model created by create_dl_model.
Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. Width (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Absolute width of the output layer.
Default: 100
. Height (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . number ; integer
Absolute height of the output layer.
Default: 100
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Mode of interpolation.
Default: ’bilinear’
List of values: Interpolation ∈ {’bilinear’}
. AlignCorners (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of coordinate transformation between output/input images.
Default: ’false’
List of values: AlignCorners ∈ {’true’, ’false’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerZoom (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Zoom layer.
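For illustration, zooming a feature map to a fixed output size might look as follows. This is a minimal sketch; the
feeding layer DLLayerFeature and the target size are assumptions:
* Minimal sketch: bilinear zoom to a width of 200 and a height of 150 pixels.
create_dl_layer_zoom_size (DLLayerFeature, 'zoom', 200, 150, 'bilinear', \
                           'false', [], [], DLLayerZoom)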
Execution Information
create_dl_layer_zoom_to_layer_size ( : : DLLayerInput,
DLLayerReference, LayerName, Interpolation, AlignCorners,
GenParamName, GenParamValue : DLLayerZoom )
Parameters
. DLLayerInput (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Feeding layer.
. DLLayerReference (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Reference layer to define the output size.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the output layer.
. Interpolation (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Mode of interpolation.
Default: ’bilinear’
List of values: Interpolation ∈ {’bilinear’}
. AlignCorners (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Type of coordinate transformation between output/input images.
Default: ’false’
List of values: AlignCorners ∈ {’true’, ’false’}
. GenParamName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .attribute.name(-array) ; string
Generic input parameter names.
Default: []
List of values: GenParamName ∈ {’is_inference_output’}
. GenParamValue (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . attribute.value(-array) ; string / integer / real
Generic input parameter values.
Default: []
Suggested values: GenParamValue ∈ {’true’, ’false’}
. DLLayerZoom (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer ; handle
Zoom layer.
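For illustration, zooming a feature map to the size of a reference layer, e.g., for a skip connection, might look as
follows. This is a minimal sketch; the feeding layer DLLayerFeature and the reference layer DLLayerReference
are assumptions:
* Minimal sketch: zoom DLLayerFeature to the output size of DLLayerReference.
create_dl_layer_zoom_to_layer_size (DLLayerFeature, DLLayerReference, \
                                    'zoom_to_reference', 'bilinear', 'false', \
                                    [], [], DLLayerZoomRef)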
Execution Information
When the graph is defined, a model can be created using create_dl_model by passing the graph’s output
layer handles in OutputLayers. Note that the output layer handles store all other layers that directly or indirectly
serve as feeding input layers for the output layers during their creation. This means that the output layer handles
keep the whole network architecture necessary for the creation of the model using create_dl_model.
The type of the created model, and hence the task the model is designed for (classification, object detection,
segmentation), is determined solely by the network’s architecture. However, if the network’s architecture allows it,
the type of the model, ’type’, can be set using set_dl_model_param. A specified model type allows a more
user-friendly usage in the HALCON deep learning workflow. Supported types are:
’generic’: This is the default model type. The task the model’s neural network can solve is defined by its
architecture. When apply_dl_model is applied for inference, the operator returns the activations of the
output layers. To train the model using train_dl_model_batch, the underlying graph requires loss
layers.
’classification’: The model is specified for classification and all layers required for training the model are adapted
to the model. When apply_dl_model is applied for inference, the output is adapted according to the type,
see apply_dl_model for more details. See Deep Learning / Classification for further information.
In addition, the operator gen_dl_model_heatmap can be used to display the model’s heatmap.
’detection’: The model is specified for object detection and instance segmentation and all layers and anchors
required for training the model are adapted to the model. When apply_dl_model is applied for inference,
the output is adapted according to the type, see apply_dl_model for more details. See Deep Learning /
Object Detection and Instance Segmentation for further information.
’multi_label_classification’: The model is specified for multi-label classification and all layers required for train-
ing the model are adapted to the model. When apply_dl_model is applied for inference, the output is
adapted according to the type, see apply_dl_model for more details. See Deep Learning / Multi-Label
Classification for further information.
’segmentation’: The model is specified for semantic segmentation or edge extraction respectively and all layers
required for training the model are adapted to the model. When apply_dl_model is applied for inference,
the output is adapted according to the type, see apply_dl_model for more details. See Deep Learning /
Semantic Segmentation and Edge Extraction for further information.
Furthermore, many deep learning procedures provide more functionality for the model if its type is set. As an
example, dev_display_dl_data can be used to display the inferred results more nicely.
Note that setting a model type requires that the graph fulfills certain structural conditions. We recommend following
the architecture of our delivered neural networks if the model type should be set to one of these types.
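For illustration, once the output layer of a graph has been created with the create_dl_layer_* operators, the
model might be created and typed as follows. This is a minimal sketch; DLLayerSoftMax stands for the assumed
output layer of a classification graph:
* Minimal sketch: create the model from the graph's output layer and set its type.
create_dl_model ([DLLayerSoftMax], DLModelHandle)
set_dl_model_param (DLModelHandle, 'type', 'classification')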
Parameters
. OutputLayers (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer(-array) ; handle
Output layers of the graph.
. DLModelHandle (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .dl_model ; handle
Handle of the deep learning model.
Result
If the parameters are valid, the operator create_dl_model returns the value 2 (H_MSG_TRUE). If necessary,
an exception is raised.
Execution Information
Module
Deep Learning Professional
Create a deep copy of the layers and all of their graph ancestors in a given deep learning model.
The operator get_dl_model_layer creates a deep copy of every layer named in LayerNames and all their
graph ancestors in the deep learning model DLModelHandle. You can retrieve the unique layer names using
get_dl_model_param with its option ’summary’.
You might use the output layers returned in DLLayers as inputs to the create_dl_layer_* and
create_dl_model operators in order to create novel model architectures based on existing models.
If you want to get multiple layers of a single model, these layers have to be specified as a LayerNames tuple in
a single call to get_dl_model_layer. Doing so, you avoid multiple deep copies of graph ancestors that are
potentially shared by the layers.
Example:
get_dl_model_layer (DLModelHandleOrig, ['layer_name_3', 'layer_name_6'], \
                    DLLayersOutput)
create_dl_model ([DLLayersOutput], DLModelHandle)
Please note that the output layers are copies. They contain the same weights and settings as in the given input
model, but they are unique copies. You cannot alter the existing model by changing the output layers.
Parameters
. DLModelHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_model ; handle
Deep learning model.
. LayerNames (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string(-array) ; string
Names of the layers to be copied.
. DLLayers (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_layer(-array) ; handle
Copies of layers and all of their ancestors.
Execution Information
• ’batchnorm_mean’: Batch-wise calculated mean values to normalize the inputs. For further information,
please refer to create_dl_layer_batch_normalization.
Restriction: This value is only supported if the layer is of type ’batchnorm’.
• ’batchnorm_mean_avg’: Average of the batch-wise calculated mean values to normalize the inputs. For
further information, please refer to create_dl_layer_batch_normalization.
Restriction: This value is only supported if the layer is of type ’batchnorm’.
• ’batchnorm_variance’: Batch-wise calculated variance values to normalize the inputs. For further informa-
tion, please refer to create_dl_layer_batch_normalization.
Restriction: This value is only supported if the layer is of type ’batchnorm’.
• ’batchnorm_variance_avg’: Average of the batch-wise calculated variance values to normalize the inputs.
For further information, please refer to create_dl_layer_batch_normalization.
Restriction: This value is only supported if the layer is of type ’batchnorm’.
• ’bias’: Biases of the layer.
• ’bias_gradient’: Gradients of the biases of the layer.
• ’bias_gradient_norm_l2’: Gradients of the biases of the layer in terms of L2 norm.
• ’bias_norm_l2’: Biases of the layer in terms of L2 norm.
• ’bias_update’: Update of the biases of the layer. This is used, e.g., in a solver that uses the last update.
• ’bias_update_norm_l2’: Update of the biases of the layer in terms of L2 norm. This is used in a solver which
uses the last update.
• ’weights’: Weights of the layer.
• ’weights_gradient’: Gradients of the weights of the layer.
• ’weights_gradient_norm_l2’: Gradients of the weights of the layer in terms of L2 norm.
• ’weights_norm_l2’: Weights of the layer in terms of L2 norm.
• ’weights_update’: Update of the weights of the layer. This is used in a solver which uses the last update.
• ’weights_update_norm_l2’: Update of the weights of the layer in terms of L2 norm. This is used in a solver
which uses the last update.
The following tables give an overview of which parameters for WeightsType can be
set using set_dl_model_layer_weights and which ones can be retrieved using
get_dl_model_layer_weights.
Attention
The operator get_dl_model_layer_weights is only applicable to self-created networks. For networks
delivered by HALCON, the operator returns an empty tuple.
Parameters
. Weights (output_object) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . image(-array) ; object : real
Output weights.
. DLModelHandle (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_model ; handle
Handle of the deep learning model.
. LayerName (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Name of the layer to be queried.
. WeightsType (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . string ; string
Selected type of layer values to be returned.
Default: ’weights’
List of values: WeightsType ∈ {’weights’, ’weights_norm_l2’, ’weights_update’,
’weights_update_norm_l2’, ’weights_gradient’, ’weights_gradient_norm_l2’, ’bias’, ’bias_norm_l2’,
’bias_update’, ’bias_update_norm_l2’, ’bias_gradient’, ’bias_gradient_norm_l2’, ’batchnorm_mean’,
’batchnorm_variance’, ’batchnorm_mean_avg’, ’batchnorm_variance_avg’}
Example
*
set_dl_model_param (DLModelHandle, 'type', 'classification')
set_dl_model_param (DLModelHandle, 'batch_size', 1)
set_dl_model_param (DLModelHandle, 'runtime', 'gpu')
set_dl_model_param (DLModelHandle, 'runtime_init', 'immediately')
*
* Train for 5 iterations.
for TrainIterations := 1 to NumTrainIterations by 1
train_dl_model_batch (DLModelHandle, DLSample, DLTrainResult)
endfor
*
* Get the gradients, weights, and activations.
get_dl_model_layer_gradients (GradientsSoftmax, DLModelHandle, 'softmax')
get_dl_model_layer_gradients (GradientsDense, DLModelHandle, 'dense')
get_dl_model_layer_gradients (GradientsConv, DLModelHandle, 'conv')
*
get_dl_model_layer_weights (WeightsDense, DLModelHandle, 'dense',\
'weights_gradient')
get_dl_model_layer_weights (WeightsConv, DLModelHandle, 'conv',\
'weights_gradient')
*
get_dl_model_layer_activations (ActivationsDense, DLModelHandle, 'dense')
get_dl_model_layer_activations (ActivationsConv, DLModelHandle, 'conv')
Execution Information
load_dl_model_weights ( : : DLModelHandleSource,
DLModelHandleTarget : ChangesByLayer )
Parameters
. DLModelHandleSource (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_model ; handle
Handle of the source deep learning model.
. DLModelHandleTarget (input_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dl_model ; handle
Handle of the target deep learning model.
. ChangesByLayer (output_control) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . integer(-array) ; integer
Indicates for every target layer how many weights changed.
Result
If the parameters are valid, the operator load_dl_model_weights returns the value 2 (H_MSG_TRUE). If
necessary, an exception is raised.
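For illustration, transferring weights from a source to a target model might look as follows. This is a minimal
sketch; the file names are placeholders and both models are assumed to contain layers with matching names and
shapes:
* Minimal sketch: copy matching weights from the source into the target model.
read_dl_model ('source_model.hdl', DLModelHandleSource)
read_dl_model ('target_model.hdl', DLModelHandleTarget)
load_dl_model_weights (DLModelHandleSource, DLModelHandleTarget, ChangesByLayer)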
Execution Information
Module
Foundation. This operator uses dynamic licensing (see the ’Installation Guide’). Which of the following modules
is required depends on the specific usage of the operator:
3D Metrology, OCR/OCV, Deep Learning Professional