The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area, there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods, and dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set, containing sixteen registered visual (0.4-0.7 μm), near-infrared (NIR, 0.7-1.0 μm) and long-wave infrared (LWIR, 8-14 μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. The scenes include (military and civilian) people who are stationary, walking or running, or carrying various objects; vehicles, foliage, and buildings or other man-made structures are also included. This data set is primarily intended for the development and evaluation of image fusion and enhancement methods.
We determined the relationship between search performance with a limited field of view (FOV) and several scanning and scene parameters in human observer experiments. The observers (38 trained army scouts) searched through a large search sector for a target (a camouflaged person) on a heath. From trial to trial the target appeared at a different location. With a joystick the observers scanned through a panoramic image (displayed on a PC monitor) while the scan path was registered. Four conditions were run, differing in sensor type (visual or thermal infrared) and window size (large or small). In the conditions with a small window size a zoom option could be used. Detection performance was highly dependent on zoom factor and deteriorated when scan speed increased beyond a threshold value. Moreover, the distribution of scan speeds scales with the threshold speed, indicating that the observers are aware of their limitations and choose a (near) optimal search strategy. We found no correlation between the fraction of detected targets and overall search time for the individual observers, indicating that both are independent measures of individual search performance. Search performance (fraction detected, total search time, time in view for detection) was found to be strongly related to target conspicuity. Moreover, we found the same relationship between search performance and conspicuity for visual and thermal targets. This indicates that search performance can be predicted directly from conspicuity, regardless of the sensor type.
We developed a simple and fast lookup-table based method to derive and apply natural daylight colors to multi-band night-time images. The method deploys an optimal color transformation derived from a set of samples taken from a daytime color reference image. The colors in the resulting colorized multiband night-time images closely resemble the colors in the daytime color reference image. Also, object colors remain invariant under panning operations and are independent of the scene content. Here we describe the ...
We present a new method to render multi-band night-time imagery (images from sensors whose sensitive range does not necessarily coincide with the visual part of the electromagnetic spectrum, e.g. image intensifiers and thermal cameras) in natural daytime colors. The color mapping is derived from the combination of a multi-band image and a corresponding natural color daytime reference image. The mapping optimizes the match between the multi-band image and the reference image, and yields a night-vision image with a natural daytime color appearance. The lookup-table based mapping procedure is extremely simple and fast and provides object color constancy. Once it has been derived, the color mapping can be deployed in real time to different multi-band image sequences of similar scenes. Displaying night-time imagery in natural colors may help human observers to process this type of imagery faster and better, thereby improving situational awareness and reducing detection and recognition times.
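The abstracts above do not spell out an implementation, but the sample-based lookup-table colorization they describe can be sketched briefly. The following Python sketch is a minimal illustration under stated assumptions: two co-registered 8-bit sensor bands, 64 bins per band, and a table filled with the mean reference color per bin (empty bins are simply left black here, whereas a practical implementation would interpolate them). All function and parameter names are hypothetical.

```python
import numpy as np

def build_color_lut(band1, band2, reference_rgb, levels=64):
    """Derive a lookup table mapping a pair of sensor-band intensities
    to the average daytime RGB color observed at matching pixels.
    band1, band2: 2-D uint8 arrays (co-registered night-vision bands).
    reference_rgb: HxWx3 uint8 daytime reference image of the same scene."""
    # Quantize each band into `levels` bins to keep the table small and dense.
    i = (band1.astype(np.uint32) * levels) // 256
    j = (band2.astype(np.uint32) * levels) // 256
    idx = (i * levels + j).ravel()
    sums = np.zeros((levels * levels, 3), dtype=np.float64)
    counts = np.zeros(levels * levels, dtype=np.float64)
    np.add.at(sums, idx, reference_rgb.reshape(-1, 3).astype(np.float64))
    np.add.at(counts, idx, 1.0)
    lut = np.zeros((levels * levels, 3), dtype=np.uint8)
    filled = counts > 0
    lut[filled] = (sums[filled] / counts[filled, None]).astype(np.uint8)
    return lut

def apply_color_lut(band1, band2, lut, levels=64):
    """Color a new multiband frame with the previously derived table."""
    i = (band1.astype(np.uint32) * levels) // 256
    j = (band2.astype(np.uint32) * levels) // 256
    return lut[i * levels + j]  # HxWx3 natural-color rendering
```

Because applying the table is a single indexing operation per frame, such a mapping runs in real time once derived, and it exhibits the object color constancy and panning invariance noted above: a given sensor-value pair always receives the same color, independent of the rest of the scene.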
The increasing availability and deployment of imaging sensors operating in multiple spectral bands has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, the cognitive aspects of multisensor image fusion have not received much attention in the development of these methods. In this study we investigate how humans interpret visual and infrared images, and we compare the interpretation of these individual image modalities to their fused counterparts, for different image fusion schemes. This was done to test to what degree image fusion schemes can enhance human perception of the structural layout and composition of realistic outdoor scenes. We asked human observers to manually segment the details they perceived as most prominent in a set of corresponding visual, infrared and fused images. For each scene, the segmentations of the individual input image modalities were used to derive a joint reference ("gold standard") contour image that represents the visually most salient details from both of these modalities for that particular scene. The resulting reference images were then used to evaluate the manual segmentations of the fused images, using a precision-recall measure as the evaluation criterion. In this sense, the best fusion method provides the largest number of correctly perceived details (originating from each of the individual modalities that were used as input for the fusion scheme) and the smallest number of false alarms (fusion artifacts or illusory details). A comparison with an objective score of subject performance indicates that the reference contour method indeed appears to characterize the performance of observers using the results of the fusion schemes. The results show that this evaluation method can provide valuable insight into the way fusion schemes combine perceptually important details from the individual input image modalities. Given a reference contour image, the method can potentially be used to design image fusion schemes that are optimally tuned to human visual perception for different applications and scenarios (e.g. environmental or weather conditions).
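As a rough illustration of the precision-recall criterion described above, the sketch below scores a manually segmented contour map of a fused image against the joint reference contour image. The dilation-based matching rule and the pixel tolerance are assumptions made for this sketch, not the paper's exact protocol.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def contour_precision_recall(segmented, reference, tolerance=2):
    """Score a binary contour map (segmented) against the joint reference
    ('gold standard') contour map, allowing `tolerance` pixels of
    localization slack when matching contour points."""
    struct = np.ones((2 * tolerance + 1, 2 * tolerance + 1), dtype=bool)
    ref_zone = binary_dilation(reference, structure=struct)
    seg_zone = binary_dilation(segmented, structure=struct)
    # Precision: fraction of drawn contour pixels near a reference contour
    # (penalizes fusion artifacts / illusory details).
    precision = np.logical_and(segmented, ref_zone).sum() / max(segmented.sum(), 1)
    # Recall: fraction of reference contour pixels near a drawn contour
    # (penalizes perceptually important details that were missed).
    recall = np.logical_and(reference, seg_zone).sum() / max(reference.sum(), 1)
    f_score = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f_score
```

Under this scoring, the best fusion scheme is the one whose segmentations maximize recall (few missed details from either input modality) at high precision (few false alarms).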
We present a new Tri-band Color Low-light Observation (TRICLOBS) system. The TRICLOBS is an all-day, all-weather surveillance and navigation tool. Its sensor suite consists of two digital image intensifiers and an uncooled long-wave infrared microbolometer, registering the visual, near-infrared and long-wave infrared bands of the electromagnetic spectrum. The optical axes of the three cameras are aligned using two dichroic beam splitters. A fast lookup-table based color transform (the Color-the-Night color mapping principle) is used to represent the TRICLOBS image in natural daylight colors (using information in the visual and NIR bands) and to maximize the detectability of thermal targets (using the LWIR signal). A bottom-up statistical visual saliency model is deployed in the initial optimization of the color mapping for surveillance and navigation purposes. Extensive observer experiments will result in further optimization of the color representation for a range of different tasks.
Current end-to-end sensor performance measures such as the TOD (Triangle Orientation Discrimination), MRTD (Minimum Resolvable Temperature Difference), DMRT (Dynamic MRT) and MTDP (Minimum Temperature Difference Perceived) were developed to determine target acquisition performance for static imaging systems. Recent developments in sensor technology (e.g. micro-scan) and real-time image enhancement techniques (e.g. dynamic super-resolution, local contrast enhancement, scene-based non-uniformity correction, etc.) require sensor performance measures that apply to dynamic imaging systems as well. Previously we showed that the TOD measure characterizes target identification performance for static imagery. Here we investigate whether the TOD measure can be used to characterize dynamic man-in-the-loop video and thermal imaging systems, in combination with several real-time image enhancement procedures. We registered video sequences representing the approach of several different target objects. First we added noise to these movie sequences. Then we applied dynamic super-resolution to improve pixel resolution and to reduce the temporal image noise. We performed observer experiments to measure TOD thresholds and identification distance thresholds, both for the original (unprocessed) and the processed (noise-corrupted and enhanced noise-corrupted) dynamic video imagery. We find that the TOD measure correctly predicts target identification performance for real objects in all tested static and dynamic conditions. We also measured observer target identification performance for moving thermal TOD test patterns registered with a thermal imaging system in combination with four different dynamic super-resolution algorithms. We compared the experimental observer results to predictions made with the target acquisition model NVThermIP, which is based on observer performance experiments with real targets. The observer results agree closely with the model predictions for all conditions tested. We conclude that the TOD measure is an efficient assessment tool to quantify target identification performance with dynamic imaging systems featuring dynamic image enhancement techniques.
Method for converting at least one image of a first spectrum into an image of a second spectrum, comprising: recording at least one first-spectrum reference image (NRI) of at least one reference scene (RS) with a first-spectrum recording apparatus (1), the first-spectrum reference image (NRI) comprising first image portions with corresponding first-spectrum sensor reference data (b1, b2); providing corresponding second-spectrum reference information (RGB); providing at least one set of reference data (T1, T2) from at ...
2006 9th International Conference on Information Fusion, 2006
Subjects used the dichoptic combination of a monocular image intensifier (NVG) and a monocular uncooled microbolometer (LWIR) to detect and localise both visual targets and camouflaged thermal targets while moving through a dimly lit complex environment. The NVG imagery enabled the subjects to move freely through the environment with high accuracy, but did not mediate the detection of camouflaged thermal targets. The LWIR mode mediated the detection of camouflaged thermal targets but did not allow the detection of visual targets, and provided insufficient detail to allow accurate movement through the environment. Subjects were quite capable of dichoptically fusing the individual LWIR and NVG images, enabling them to detect all (visual and thermal) targets while moving accurately through the environment. We conclude that dichoptic fusion of NVG and LWIR imagery is quite feasible and is a simple way to provide observers with enhanced situational awareness in nighttime operations.
Eye movement recordings do not tell us whether observers are 'really looking' or whether they are paying attention to something other than the visual environment. We want to determine whether an observer's main current occupation is visual or not by investigating fixation patterns and EEG. Subjects were presented with auditory and visual stimuli. In some conditions they focused on the auditory information, whereas in others they searched or judged the visual stimuli. Observers made more, and less scattered, fixations in the visual tasks than in the auditory tasks, and their average fixation location was less variable. The fixated features revealed which target the observers were looking for. Gaze was not attracted more by salient features when performing the auditory task. EEG oscillations of 8-12 Hz recorded over the parieto-occipital regions were stronger during the auditory task than during visual search. Our results are directly relevant for monitoring surveillance workers.
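The parieto-occipital alpha effect reported here can be quantified with standard spectral estimation. The sketch below computes 8-12 Hz band power for a single EEG channel with Welch's method; the sampling rate and window length are arbitrary choices for illustration, not values from the study.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(eeg_channel, fs=256.0, band=(8.0, 12.0)):
    """Estimate alpha (8-12 Hz) band power for one EEG channel,
    e.g. a parieto-occipital electrode, using Welch's method."""
    freqs, psd = welch(eeg_channel, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    # Integrate the power spectral density over the alpha band.
    return np.trapz(psd[mask], freqs[mask])
```

Comparing this measure between task segments would reproduce the contrast described above, with higher parieto-occipital alpha power expected when attention is directed to the auditory rather than the visual input.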
International Journal of Psychophysiology, 2014
Learning to master a task is expected to be accompanied by a decrease in effort during task execution. We examine the possibility of monitoring learning using physiological measures that have been reported to reflect effort or workload. Thirty-five participants performed different difficulty levels of the n-back task while a range of physiological and performance measurements were recorded. In order to dissociate non-specific time-related effects from effects of learning, we used the easiest level as a baseline condition, which is expected to reflect only non-specific effects of time. Performance and subjective measures confirmed more learning for the difficult level than for the easy level. The difficulty levels affected the physiological variables as expected, demonstrating their sensitivity. However, while most of the physiological variables were also affected by time, time-related effects were generally the same for the easy and the difficult level. Thus, in a w...
Here we introduce a new experimental paradigm to induce mental stress in a quick and easy way, while adhering to ethical standards and controlling for potential confounds resulting from sensory input and body movements. In our Sing-a-Song Stress Test, participants are presented with neutral messages on a screen, interleaved with 1-min time intervals. The final message is that the participant should sing a song aloud after the interval has elapsed. Participants sit still during the whole procedure. We found that heart rate and skin conductance during the 1-min intervals following the sing-a-song stress message are substantially higher than during intervals following neutral messages. The magnitude of the rise is comparable to that achieved by the Trier Social Stress Test. The skin conductance increase correlates positively with the experienced stress level as reported by participants. We also simulated stress detection in real time. When using both skin conductance and heart rate, stress is detected for 18 out of 20 participants, approximately 10 s after onset of the sing-a-song message. In conclusion, the Sing-a-Song Stress Test provides a quick, easy, controlled and potent way to induce mental stress, and could be helpful in studies ranging from examining physiological effects of mental stress to evaluating interventions to reduce stress.
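The abstract reports that simulated real-time detection, using skin conductance and heart rate together, flags stress roughly 10 s after message onset. A toy detector in that spirit might look as follows; the sampling rate, baseline window, z-score threshold, and the requirement that both signals exceed it simultaneously are all assumptions of this sketch.

```python
import numpy as np

def detect_stress(scl, hr, fs=10.0, baseline_s=30.0, z_thresh=2.0):
    """Return the first time (in seconds) at which both skin conductance
    level (scl) and heart rate (hr), sampled at fs Hz as 1-D arrays, rise
    more than z_thresh baseline standard deviations above their baseline
    means; return None if that never happens."""
    n0 = int(baseline_s * fs)  # samples used to estimate the resting baseline
    mu_s, sd_s = scl[:n0].mean(), scl[:n0].std() + 1e-9
    mu_h, sd_h = hr[:n0].mean(), hr[:n0].std() + 1e-9
    for k in range(n0, len(scl)):
        if (scl[k] - mu_s) / sd_s > z_thresh and (hr[k] - mu_h) / sd_h > z_thresh:
            return k / fs
    return None
```

Requiring both signals to deviate is one simple way to trade a slightly longer detection latency for fewer false positives, consistent with the finding that the combined signals detected stress in 18 out of 20 participants.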
Previous studies indicate that both electroencephalogram (EEG) spectral power (in particular in the alpha and theta bands) and event-related potentials (ERPs) (in particular the P300) can be used as measures of mental workload or memory load. We compare their ability to estimate workload level in a well-controlled task. In addition, we combine both types of measures in a single classification model to examine whether this results in higher classification accuracy than either one alone. Participants watched a sequence of visually presented letters and indicated whether or not the current letter was the same as the one n instances before. Workload was varied by varying n. We developed different classification models using ERP features, frequency power features, or a combination of both (fusion). Training and testing of the models simulated an online workload estimation situation. All our ERP, power and fusion models provide classification accuracies between 80% and 90% when distinguishing between the highest and the lowest workload condition after 2 min. For 32 out of 35 participants, classification was significantly higher than chance level after 2.5 s (or one letter) as estimated by the fusion model. Differences between the models are rather small, though the fusion model performs better than the other models when only short data segments are available for estimating workload.
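The abstract leaves the classifier unspecified, so the sketch below shows only one plausible reading of the fusion model: concatenating ERP and spectral-power features and training a single linear classifier. The use of logistic regression and the feature layout are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_fusion_model(erp_features, power_features, labels):
    """Fuse ERP and spectral-power features by concatenation and train a
    linear classifier to separate high- from low-workload epochs.
    erp_features: (n_epochs, n_erp) array, e.g. P300 amplitudes.
    power_features: (n_epochs, n_pow) array, e.g. alpha/theta band power.
    labels: (n_epochs,) array, 0 = low workload, 1 = high workload."""
    X = np.hstack([erp_features, power_features])
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X, labels)
    return model
```

Training and testing such a model on data segments of increasing length would simulate the online estimation situation described above, where accuracy grows from just above chance after one letter to 80-90% after 2 min.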
We developed a simple and fast lookup-table based method to derive and apply natural daylight colors to multi-band night-time images. The method deploys an optimal color transformation derived from a set of samples taken from a daytime color reference image. The colors in the resulting colorized multiband night-time images closely resemble the colors in the daytime color reference image. Also, object colors remain invariant under panning operations and are independent of the scene content.
Previously, we presented two color mapping methods for the application of daytime colors to fused nighttime (e.g., intensified and longwave infrared or thermal (LWIR)) imagery. These mappings not only impart a natural daylight color appearance to multiband nighttime images but also enhance their contrast and the visibility of otherwise obscured details. As a result, it has been shown that these colorizing methods lead to an increased ease of interpretation, better discrimination and identification of materials, faster reaction times and ultimately improved situational awareness. A crucial step in the proposed coloring process is the choice of a suitable color mapping scheme. When both daytime color images and multiband sensor images of the same scene are available, the color mapping can be derived from matching image samples (i.e., by relating color values to sensor output signal intensities in a sample-based approach). When no exact matching reference images are available, the color transformation can be derived from the first-order statistical properties of the reference image and the multiband sensor image. In the current study, we investigated new color fusion schemes that combine the advantages of both methods (i.e., the efficiency and color constancy of the sample-based method with the ability of the statistical method to use the image of a different but somewhat similar scene as a reference image), using the correspondence between multiband sensor values and daytime colors (sample-based method) in a smooth transformation (statistical method). We designed and evaluated three new fusion schemes that focus on (i) a closer match with the daytime luminances; (ii) an improved saliency of hot targets; and (iii) an improved discriminability of materials. We performed both qualitative and quantitative analyses to assess the weak and strong points of all methods.
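The statistical method referred to here rests on first-order statistics of the reference and sensor images. A standard instance of such a transform, in the style of Reinhard et al.'s color transfer, matches per-channel mean and standard deviation; whether the authors used exactly this variant, or this color space, is not stated in the abstract.

```python
import numpy as np

def statistical_color_transfer(source, reference):
    """Match the per-channel mean and standard deviation (first-order
    statistics) of a false-color multiband image to those of a daytime
    reference image of a similar scene. Inputs are float HxWx3 arrays,
    ideally in a decorrelated color space such as l-alpha-beta."""
    out = np.empty_like(source, dtype=np.float64)
    for c in range(3):
        s_mu, s_sd = source[..., c].mean(), source[..., c].std() + 1e-9
        r_mu, r_sd = reference[..., c].mean(), reference[..., c].std()
        # Shift and scale so the channel adopts the reference statistics.
        out[..., c] = (source[..., c] - s_mu) * (r_sd / s_sd) + r_mu
    return out
```

Because only global statistics are transferred, a reference image of a different but similar scene suffices, which is exactly the flexibility the hybrid schemes above combine with the color constancy of the sample-based lookup table.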