Figure 5: A simplification of the partitioned-convolution algorithm performed by CLAM's "Convolution" processing.
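The partitioned-convolution idea behind the figure can be sketched briefly. The impulse response is split into fixed-size partitions, each partition's spectrum is pre-computed, and every input block is multiplied with each partition spectrum and overlap-added with a delay of one block per partition index. This is a minimal illustrative sketch of uniformly partitioned convolution, not CLAM's actual implementation; the function name and block size are placeholders.

```python
import numpy as np

def partitioned_convolve(signal, ir, block=256):
    """Uniformly partitioned overlap-add convolution (illustrative sketch)."""
    # Split the impulse response into fixed-size partitions and
    # pre-compute the FFT of each one (the "filter spectra").
    n_parts = int(np.ceil(len(ir) / block))
    fft_size = 2 * block  # long enough to avoid circular-convolution aliasing
    parts = [np.fft.rfft(ir[i * block:(i + 1) * block], fft_size)
             for i in range(n_parts)]

    out = np.zeros(len(signal) + len(ir) - 1)
    n_blocks = int(np.ceil(len(signal) / block))
    # Each input block is convolved with every partition; partition p
    # contributes with a delay of p blocks, so results overlap-add at
    # offset (b + p) * block.
    for b in range(n_blocks):
        X = np.fft.rfft(signal[b * block:(b + 1) * block], fft_size)
        for p, P in enumerate(parts):
            y = np.fft.irfft(X * P, fft_size)
            start = (b + p) * block
            end = min(start + fft_size, len(out))
            out[start:end] += y[:end - start]
    return out
```

The result matches direct convolution, while the per-block FFT work is what makes the scheme usable for long reverberation tails in real time.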
Figure 3: CLAM network that renders the audio in B-Format.

Figure 6: CLAM network that converts 3D audio in B-Format to two-channel binaural, to be listened to with headphones. The technique is based on simulating how the human head and ears filter the audio frequencies depending on the source direction. For this offline application, we currently use a specific CLAM file format that contains all the parameters describing the motion of the sources and the listener at each video frame, the camera zoom, etc. In the mid-term, we plan to use the same SpatDIF format in both the real-time and offline applications (fig. 7).

Figure 8: Export of Blender scene geometries and animations.

Figure 10: Photograph of the arrangement of 15 speakers used during development.
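One common way to realize the B-Format-to-binaural conversion mentioned for Figure 6 is virtual-speaker decoding: the B-Format channels are decoded to a ring of virtual loudspeakers, and each speaker feed is filtered with the head-related impulse response (HRIR) pair for its direction. The sketch below assumes horizontal-only first-order B-Format (W, X, Y), a simple decode gain convention, and placeholder HRIRs; it is not CLAM's actual implementation.

```python
import numpy as np

def bformat_to_binaural(W, X, Y, hrirs, azimuths):
    """Decode horizontal B-Format to virtual speakers, then filter each
    speaker feed with the HRIR pair for its azimuth (illustrative sketch).

    hrirs: array of shape (n_speakers, 2, hrir_len), left/right pairs.
    azimuths: speaker directions in radians, same order as `hrirs`.
    """
    n = len(W) + hrirs.shape[-1] - 1
    left, right = np.zeros(n), np.zeros(n)
    for (hl, hr), az in zip(hrirs, azimuths):
        # Basic first-order decode for a speaker at azimuth `az`
        # (one simple gain convention among several in use).
        feed = 0.5 * (np.sqrt(2) * W + X * np.cos(az) + Y * np.sin(az))
        left += np.convolve(feed, hl)
        right += np.convolve(feed, hr)
    return left, right
```

With measured HRIRs in place of the placeholders, each virtual speaker is "heard" from its direction, which is what produces the binaural effect over headphones.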