We present a map-less path planning algorithm based on Deep Reinforcement Learning (DRL) for mobile robots navigating in unknown environments that relies only on 40-dimensional raw laser data and odometry information. The planner is trained using a reward function shaped by online knowledge of the map of the training environment, obtained using a grid-based Rao-Blackwellized particle filter, in an attempt to enhance the obstacle awareness of the agent. The agent is trained in a complex simulated environment and evaluated in two unseen ones. We show that the policy trained using the introduced reward function not only outperforms standard reward functions in terms of convergence speed, with a 36.9% reduction in iteration steps and a reduction in collision samples, but also drastically improves the behaviour of the agent in unseen environments: by 23% in a simpler workspace and by 45% in a more cluttered one. Furthermore, the policy trained in the sim...
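The shaped reward described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the weights, the safe-distance radius, and the exact terms are assumptions, with the obstacle term standing in for the map-derived obstacle awareness.

```python
def shaped_reward(dist_to_goal, prev_dist_to_goal, min_obstacle_dist,
                  collided, goal_reached,
                  w_progress=1.0, w_obstacle=0.5, safe_dist=0.5):
    """Illustrative map-shaped reward: progress toward the goal plus a
    penalty that grows as the agent nears obstacles known from the
    online occupancy grid. Weights and terminal values are assumptions."""
    if goal_reached:
        return 10.0
    if collided:
        return -10.0
    progress = prev_dist_to_goal - dist_to_goal              # positive when moving closer
    obstacle_penalty = max(0.0, safe_dist - min_obstacle_dist)  # non-zero only inside the safe radius
    return w_progress * progress - w_obstacle * obstacle_penalty
```

The obstacle term is what distinguishes this shaping from a plain progress-plus-collision reward: the agent is penalised continuously as it approaches mapped obstacles rather than only on impact.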
We developed Wild Photoshoot, a game that uses naturally-occurring neurophysiological activity to augment interaction in a virtual environment in an intuitive way. In this game, the user is a wildlife photographer. Besides normal movement controls (mouse and keyboard), the camera is adjusted according to where the user is looking (overt attention, OA). When the animal has been found, the user has to use covert attention (CA) (Van Gerven et al., 2009) to take the picture, because the animal flees when the user looks at it directly. The mental tasks for OA and CA come naturally given the situation. Initial offline tests assessed the performance of EEG-based CA and EOG-based OA. For CA, the average accuracy was 67% (2 classes, 4 participants), with the pipeline: common average reference, band-pass 8-14 Hz, whitening, covariance, and logistic regression. The pipeline for OA is based on Barea et al., 2003 and Itakura and Sakamoto, 2010: band-pass 0.05-20 Hz, derivation, threshold, integration, and linear regression. For horizontal eye movement the average error was 2.2 cm, and for vertical eye movement 4.8 cm (4 participants). Although BCIs are the last option for interaction for patients who have no residual muscle control, there are also patients with limited control who could benefit from a hybrid BCI setup that combines these two inputs. The naturalness of these inputs can make BCIs easy to use, an aspect that will be appreciated by both patients and healthy users.
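The OA pipeline steps named above (band-pass, derivation, threshold, integration) can be sketched as follows. This is an illustrative reconstruction: the final linear regression to screen coordinates is omitted, and the filter order and threshold value are assumptions, not the authors' settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def eog_position(signal, fs, move_threshold=1.0):
    """Illustrative EOG gaze pipeline: band-pass, derivation,
    threshold, integration. Parameter values are assumptions."""
    # band-pass 0.05-20 Hz, as stated in the abstract
    b, a = butter(2, [0.05, 20.0], btype="band", fs=fs)
    filtered = filtfilt(b, a, signal)
    # derivation: velocity emphasises saccade onsets over slow drift
    velocity = np.gradient(filtered) * fs
    # threshold: suppress drift, keep only clear eye movements
    velocity[np.abs(velocity) < move_threshold] = 0.0
    # integration: recover relative eye position from velocity
    return np.cumsum(velocity) / fs
```

Thresholding the derivative before integrating is what keeps slow baseline drift, a well-known EOG problem, from accumulating into a spurious position offset.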
An agent-based computational model allows researchers to simulate the outcome of complex interactions. Simulations are done by creating a virtual environment in which a large number of autonomous agents operate. Each agent follows a micro-level specification that is relatively simple, but when brought together the agents can interact in highly complex ways (Epstein, 1999). This modelling methodology has a long history, dating back to the work of John von Neumann on "universal constructors" and "cellular automata" (von Neumann & Burks, 1966). These small programs could interact and reproduce, and as such were capable of forming a virtual society... at least in theory. Even though authors like Thomas Schelling already suggested using these automata to model the social sciences (Schelling, 1978), at that time computing capacity was too limited to put these ideas into practice (Epstein & Axtell, 1996). In the last few decades, advances in computing power have caused a surge of interest in agent-based models (ABMs). Increasingly, the methodology has been used to work on the many challenges of the social sciences. In particular, Arthur (1991) and Holland & Miller (1991) introduced the technique to economic modelling. Agent-based models, they argued, would not just model the virtual society's behaviour at a micro-level, but also attempt to uncover the motivations and processes underlying that behaviour.
At vital moments in professional soccer matches, penalties are often missed. Psychological factors, such as anxiety and pressure, are among the critical causes of these mistakes, commonly known as choking under pressure. Nevertheless, the factors have not been fully explored. In this study, we used functional near-infrared spectroscopy (fNIRS) to investigate the influence of the brain on this process. An in-situ study was set up (N = 22), in which each participant took 15 penalties under three different pressure conditions: without a goalkeeper, with an amiable goalkeeper, and with a competitive goalkeeper. Both experienced and inexperienced soccer players were recruited, and brain activation was compared across groups. In addition, fNIRS activation was compared between sessions in which participants reported anxiety and sessions without an anxiety report, and between penalty-scoring and -missing sessions. The results show that the task-relevant brain region, the motor cortex, was more activate...
Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, 2015
Advances in the field of touch recognition could open up applications for touch-based interaction in areas such as Human-Robot Interaction (HRI). We extended this challenge to the research community working on multimodal interaction, with the goal of sparking interest in the touch modality and promoting exploration of data processing techniques from other, more mature modalities for touch recognition. Two data sets were made available containing labeled pressure sensor data of social touch gestures that were performed by touching a touch-sensitive surface with the hand. Each set was collected from similar sensor grids, but under conditions reflecting different application orientations: CoST (Corpus of Social Touch) and HAART (The Human-Animal Affective Robot Touch gesture set). In this paper we describe the challenge protocol and summarize the results of the touch challenge hosted in conjunction with the 2015 ACM International Conference on Multimodal Interaction (ICMI). The most important outcomes of the challenge were: (1) transferring techniques from other modalities, such as image processing, speech, and human action recognition, provided valuable feature sets; (2) gesture classification confusions were similar despite the various data processing methods used.
Poel M., Comparison of Silhouette Shape Descriptors for Example-based Human Pose Recovery
Automatically recovering human poses from visual input is useful but challenging due to variations in image space and the high dimensionality of the pose space. In this paper, we assume that a human silhouette can be extracted from monocular visual input. We compare three shape descriptors that are used in the encoding of silhouettes: Fourier descriptors, shape contexts and Hu moments. An example-based approach is taken to recover upper body poses from these descriptors. We perform experiments with deformed silhouettes to test each descriptor's robustness against variations in body dimensions, viewpoint and noise. It is shown that Fourier descriptors and shape context histograms outperform Hu moments for all deformations.
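As one concrete example of the descriptors compared above, a complex-coordinate Fourier descriptor of a closed silhouette contour can be sketched as follows. The normalisation choices (dropping the DC term, using magnitudes, dividing by the first harmonic) are standard conventions and an assumption about the paper's exact variant.

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=10):
    """Illustrative Fourier descriptor of a closed contour given as an
    (N, 2) array of boundary points. Normalisation is a common
    convention, not necessarily the paper's exact method."""
    z = contour[:, 0] + 1j * contour[:, 1]   # complex boundary representation
    coeffs = np.fft.fft(z)
    coeffs = coeffs[1:n_coeffs + 1]          # drop DC term -> translation invariance
    mags = np.abs(coeffs)                    # magnitudes -> rotation invariance
    return mags / mags[0]                    # divide by first harmonic -> scale invariance
```

These invariances are why such descriptors tolerate the body-dimension and viewpoint deformations tested in the paper: a scaled or rotated silhouette maps to the same feature vector.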
The Future of Brain/Neural Computer Interaction: Horizon 2020
The main objective of this roadmap is to provide a global perspective on the BCI field now and in the future. For readers not familiar with BCIs, we introduce basic terminology and concepts. We discuss what BCIs are, what BCIs can do, and who can benefit from BCIs. We illustrate our arguments with use cases to support the main messages. After reading this roadmap you will have a clear picture of the potential benefits and challenges of BCIs, the steps necessary to bridge the gap between current and future applications, and the potential impact of BCIs on society in the next decade and beyond.
Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 2012
Does using a brain-computer interface (BCI) influence the social interaction between people when playing a cooperative game? By measuring the amount of speech, utterances, instrumental gestures and empathic gestures during a cooperative game in which two participants had to reach a common goal, and questioning participants about their own experience afterwards, this study attempts to provide answers to this question. Three selection methods are compared: point-and-click, BCI, and timed selection, a selection method similar in difficulty to BCI selection. The results show that social interaction changed when using a BCI compared to using point-and-click: there was a higher amount of utterances and empathic gestures. This indicates that the participants automatically reacted more to the higher difficulty of the BCI selection method. Participants also reported that they felt they cooperated better when using point-and-click.

Preface. This master thesis is the result of the study performed during my graduation at the University of Twente in Enschede. This study has been partly published at the International Conference on Intelligent Technologies for Interactive Entertainment 2011 (Appendix E). There are a few people to whom I wish to express my appreciation. First I wish to thank my examination committee: Prof. dr. ir. Anton Nijholt, Dr. Mannes Poel, Dr. Job Zwiers, Hayrettin Gürkök M.Sc. and Danny Plass-Oude Bos M.Sc. A special thanks to Hayrettin Gürkök and Danny Plass-Oude Bos for their continual input and support, which significantly improved the quality of this work, and for the opportunities I got to demonstrate and publish this research outside the university's boundaries. Next I wish to thank my family, as they supported me and made it possible for me to pursue the educational career of my own choosing.
Another thanks goes to my co-students for all the cooperation and friendships developed during the last years. Especially Gido Hakvoort for his cooperation, input, help with annotation and the discussions on both our works, and together with his brother Michiel Hakvoort for the great times outside either of our studies.
The Nintendo DS™ is a handheld game computer that includes a small sketch pad as one of its input modalities. We discuss the possibilities for recognition of simple line drawings on this device, focusing on robustness and real-time behavior. The results of our experiments show that with devices that are now becoming available in the consumer market, effective image recognition is possible, provided a clear application domain is selected. In our case, this domain was the use of simple images as an input modality for computer games that are typical for small handheld devices.
Proceedings of the 16th International Conference on Multimodal Interaction, 2014
Touch behavior is of great importance during social interaction. To transfer the tactile modality from interpersonal interaction to other areas such as Human-Robot Interaction (HRI) and remote communication, automatic recognition of social touch is necessary. This paper introduces CoST: Corpus of Social Touch, a collection containing 7805 instances of 14 different social touch gestures. The gestures were performed in three variations: gentle, normal and rough, on a sensor grid wrapped around a mannequin arm. Recognition of the rough variations of these 14 gesture classes using Bayesian classifiers and Support Vector Machines (SVMs) resulted in an overall accuracy of 54% and 53%, respectively. Furthermore, this paper provides more insight into the challenges of automatic recognition of social touch gestures, including which gestures can be recognized more easily and which are more difficult to recognize.
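The classification stage can be illustrated with a minimal Gaussian naive Bayes model, standing in for the Bayesian classifiers the paper evaluates. Everything below (the model details, the diagonal-Gaussian assumption) is an illustrative sketch, not the CoST features or results.

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: per-class diagonal Gaussians over
    the feature vector, combined with class priors via Bayes' rule."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
        self.logprior = np.log([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log N(x | mu, var) summed over (assumed independent) features
        ll = -0.5 * (np.log(2 * np.pi * self.var[:, None])
                     + (X[None] - self.mu[:, None]) ** 2 / self.var[:, None]).sum(-1)
        return self.classes[np.argmax(ll + self.logprior[:, None], axis=0)]
```

In a gesture setting, each row of `X` would be a feature vector summarising one pressure-frame sequence (e.g. mean and maximum pressure per sensor cell), with `y` holding the gesture labels.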
Recently, research into Brain-Computer Interfacing (BCI) applications for healthy users, such as games, has been initiated. But why would a healthy person use a still-unproven technology such as BCI for game interaction? BCI provides a combination of information and features that no other input modality can offer. But for general acceptance of this technology, usability and user experience will need to be taken into account when designing such systems. Therefore, this chapter gives an overview of the state of the art of BCI in games and discusses the consequences of applying knowledge from Human-Computer Interaction (HCI) to the design of BCI for games. The integration of HCI with BCI is illustrated by research examples and showcases, intended to take this promising technology out of the lab.
Automatic lipreading is automatic speech recognition that uses only visual information. The relevant data in a video signal is isolated and features are extracted from it. From a sequence of feature vectors, where every vector represents one video image, a sequence of higher-level semantic elements is formed. These semantic elements are "visemes", the visual equivalent of "phonemes". The developed prototype uses a Time Delayed Neural Network to classify the visemes.
A distributed architecture for a system simulating the emotional state of an agent acting in a virtual environment is presented. The system is an implementation of an event-appraisal model of emotional behaviour and uses neural networks to learn how the emotional state should be influenced by the occurrence of environmental and internal stimuli. Part of the modular system is domain-independent. The system can easily be adapted to handle different events that influence the emotional state. A first prototype and a testbed for this architecture are presented.
In this paper we show that reinforcement learning can be used for minutiae detection in fingerprint matching. Minutiae are characteristic features of fingerprints that determine their uniqueness. Classical approaches use a series of image processing steps for this task, but lack robustness because they are highly sensitive to noise and image quality. We propose a more robust approach, in which an autonomous agent walks around in the fingerprint and learns how to follow ridges and how to recognize minutiae. The agent is situated in the environment, the fingerprint, and uses reinforcement learning to obtain an optimal policy. Multi-layer perceptrons are used to overcome the difficulties of the large state space. By choosing the right reward structure and learning environment, the agent is able to learn the task. One of the main difficulties is that the goal states are not easily specified, for they are part of the learning task as well. That is, the recognition of minutiae has to be learned in addition to learning how to walk over the ridges in the fingerprint. Results of successful first experiments are presented.
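The reward structure described above can be illustrated with a deliberately tiny tabular Q-learning toy: the agent earns a small bonus for staying on a "ridge" (here a fixed grid row) and a large terminal reward at a cell standing in for a minutia. The real system operates on fingerprint images with multi-layer perceptrons; the grid, rewards, and hyperparameters below are all illustrative assumptions.

```python
import numpy as np

SIZE, RIDGE_ROW, GOAL = 5, 2, (2, 4)
ACTIONS = [(-1, 0), (1, 0), (0, 1)]              # up, down, right
Q = np.zeros((SIZE, SIZE, len(ACTIONS)))
rng = np.random.default_rng(0)

def step(r, c, a):
    """Apply action a, clamped to the grid; return next state and reward."""
    nr = min(max(r + ACTIONS[a][0], 0), SIZE - 1)
    nc = min(c + ACTIONS[a][1], SIZE - 1)
    # small bonus for staying on the ridge, large reward at the 'minutia'
    reward = 10.0 if (nr, nc) == GOAL else (0.1 if nr == RIDGE_ROW else -0.1)
    return nr, nc, reward

for _ in range(2000):                            # training episodes
    r, c = RIDGE_ROW, 0                          # start on the ridge
    for _ in range(100):                         # cap episode length
        a = int(rng.integers(3)) if rng.random() < 0.2 else int(np.argmax(Q[r, c]))
        nr, nc, reward = step(r, c, a)
        target = reward if (nr, nc) == GOAL else reward + 0.9 * Q[nr, nc].max()
        Q[r, c, a] += 0.5 * (target - Q[r, c, a])
        r, c = nr, nc
        if (r, c) == GOAL:
            break
```

After training, the greedy policy from the start cell follows the ridge rightwards to the goal, mirroring the paper's idea that ridge-following behaviour emerges from the reward structure rather than being hand-coded.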
Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 2018
Digital game research has been growing rapidly, with studies dedicated to game experience and to adopting new technologies. Alongside this, research into Brain-Computer Interfaces (BCI) is growing in game applications. Besides technical shortcomings, BCI research in gaming can also suffer from challenges such as poorly designed games that do not provide a fun experience to their players. In this paper we present a novel multiplayer Steady-State Visually Evoked Potential (SSVEP) game, Kessel Run, with BCI-focused cooperative mechanics, drawing attention to the impact of game design on the user experience. Twelve participants played Kessel Run using a 2-electrode cap and rated their experience in a questionnaire. The SSVEP performance was lower than expected, with an average classification accuracy of 55% and a maximum of 79% at a 33% chance level. Despite the low performances, players still reported a state of flow, felt behaviorally involved and empathized with each other, finding it enjoyable to play the game together.
We present a map-less path planning algorithm based on Deep Reinforcement Learning (DRL) for mobi... more We present a map-less path planning algorithm based on Deep Reinforcement Learning (DRL) for mobile robots navigating in unknown environment that only relies on 40-dimensional raw laser data and odometry information. The planner is trained using a reward function shaped based on the online knowledge of the map of the training environment, obtained using grid-based Rao-Blackwellized particle filter, in an attempt to enhance the obstacle awareness of the agent. The agent is trained in a complex simulated environment and evaluated in two unseen ones. We show that the policy trained using the introduced reward function not only outperforms standard reward functions in terms of convergence speed, by a reduction of 36.9\% of the iteration steps, and reduction of the collision samples, but it also drastically improves the behaviour of the agent in unseen environments, respectively by 23\% in a simpler workspace and by 45\% in a more clustered one. Furthermore, the policy trained in the sim...
We developed Wild Photoshoot, a game that uses naturally-occurring neurophysiological activity to... more We developed Wild Photoshoot, a game that uses naturally-occurring neurophysiological activity to augment the interaction in a virtual environment in an intuitive way. In this game, the user is a wildlife photographer. Besides normal movement controls (mouse and keyboard), the camera is adjusted according to where the user is looking (overt attention, OA). When the animal has been found, the user will have to use covert attention (CA) (Van Gerven et al., 2009) to take the picture, because when the user looks at the animal directly, it will flee. The mental tasks for OA and CA come naturally given the situation. Initial offline tests assessed the performance of EEG-based CA and EOG-based OA. For CA, the average accuracy was 67% (2 classes, 4 participants), with the pipeline: common average reference, band pass 8-14 Hz, whitening, covariance and logistic regression. The pipeline for OA is based on Barea et al., 2003 and Itakura and Sakamoto, 2010: band pass 0.05-20 Hz, derivation, threshold, integration, and linear regression. For horizontal eye movement the average error was 2.2cm, and for vertical eye movement 4.8cm (4 participants). Although BCIs are the last option for interaction for those patients who have no residual muscle control, there are also patients with limited control, who could benefit from a hybrid BCI setup which combines these two inputs. The naturalness of these inputs can make BCIs easy to use; an aspect that will be appreciated by both patients and healthy users.
An agent-based computational model allows researchers to simulate the outcome of complex interact... more An agent-based computational model allows researchers to simulate the outcome of complex interactions. Simulations are done by creating a virtual environment in which a large number of autonomous agents operate. Each of the agents follows a micro-level specification that is relatively simple, but when brought together they can interact in highly complex ways (Epstein, 1999). This modelling methodology has a long history, dating back to the work of John von Neumann with his "universal constructors" and "cellular automata" (von Neumann & Burks, 1966). These small programs could interact and reproduce and as such were capable of forming a virtual society... at least in theory. Even though authors like Thomas Schelling already suggested using these automata to model the social sciences (Schelling, 1978), at that time the computing capacity was too limited to put these ideas in practice (Epstein & Axtell, 1996). In the last few decades, advances in computing power have caused a surge of interest in agent-based models (ABMs). Increasingly, the methodology has been used to work on the many challenges of the social science. In particular, Arthur (1991) and Holland & Miller (1991) introduced the technique to economic modelling. Agentbased models, they argued, would not just model the virtual society's behaviour at a micro-level, but also attempt to uncover the motivations and processes underlying that behaviour.
At vital moments in professional soccer matches, penalties were often missed. Psychological facto... more At vital moments in professional soccer matches, penalties were often missed. Psychological factors, such as anxiety and pressure, are among the critical causes of the mistakes, commonly known aschoking under pressure. Nevertheless, the factors have not been fully explored. In this study, we used functional near-infrared spectroscopy (fNIRS) to investigate the influence of the brain on this process. Anin-situstudy was set-up (N= 22), in which each participant took 15 penalties under three different pressure conditions: without a goalkeeper, with an amiable goalkeeper, and with a competitive goalkeeper. Both experienced and inexperienced soccer players were recruited, and the brain activation was compared across groups. Besides, fNIRS activation was compared between sessions that participants felt anxious against sessions without anxiety report, and between penalty-scoring and -missing sessions. The results show that the task-relevant brain region, the motor cortex, was more activate...
Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, 2015
Advances in the field of touch recognition could open up applications for touch-based interaction... more Advances in the field of touch recognition could open up applications for touch-based interaction in areas such as Human-Robot Interaction (HRI). We extended this challenge to the research community working on multimodal interaction with the goal of sparking interest in the touch modality and to promote exploration of the use of data processing techniques from other more mature modalities for touch recognition. Two data sets were made available containing labeled pressure sensor data of social touch gestures that were performed by touching a touch-sensitive surface with the hand. Each set was collected from similar sensor grids, but under conditions reflecting different application orientations: CoST: Corpus of Social Touch and HAART: The Human-Animal Affective Robot Touch gesture set. In this paper we describe the challenge protocol and summarize the results from the touch challenge hosted in conjunction with the 2015 ACM International Conference on Multimodal Interaction (ICMI). The most important outcomes of the challenges were: (1) transferring techniques from other modalities, such as image processing, speech, and human action recognition provided valuable feature sets; (2) gesture classification confusions were similar despite the various data processing methods used.
Poel M., Comparison of Silhouette Shape Descriptors for Example-based Human Pose Recovery
Automatically recovering human poses from visual input is useful but challenging due to variation... more Automatically recovering human poses from visual input is useful but challenging due to variations in image space and the high dimensionality of the pose space. In this paper, we assume that a human silhouette can be extracted from monocular visual input. We compare three shape descriptors that are used in the encoding of silhouettes: Fourier descriptors, shape contexts and Hu moments. An examplebased approach is taken to recover upper body poses from these descriptors. We perform experiments with deformed silhouettes to test each descriptor’s robustness against variations in body dimensions, viewpoint and noise. It is shown that Fourier descriptors and shape context histograms outperform Hu moments for all deformations. 1
The Future of Brain/Neural Computer Interaction: Horizon 2020
The main objective of this roadmap is to provide a global perspective on the BCI field now and in... more The main objective of this roadmap is to provide a global perspective on the BCI field now and in the future. For readers not familiar with BCIs, we introduce basic terminology and concepts. We discuss what BCIs are, what BCIs can do, and who can benefit from BCIs. We illustrate our arguments with use cases to support the main messages. After reading this roadmap you will have a clear picture of the potential benefits and challenges of BCIs, the steps necessary to bridge the gap between current and future applications, and the potential impact of BCIs on society in the next decade and beyond.
Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 2012
Does using a brain-computer interface (BCI) influence the social interaction between people when ... more Does using a brain-computer interface (BCI) influence the social interaction between people when playing a cooperative game? By measuring the amount of speech, utterances, instrumental gestures and empathic gestures during a cooperative game where two participants had to reach a common goal, and questioning participants about their own experience afterwards this study attempts to provide answers to this question. Three selection methods are compared; point and click, BCI and timed selection which is a selection method similar in difficulty as BCI selection. The results show that social interaction changed when using a BCI compared to using point and click, there was a higher amount of utterances and empathic gestures. This indicates that the participants automatically reacted more to the higher difficultly of the BCI selection method. Participants also reported that they felt they cooperated better during the use of the point and click. Preface This master thesis is the result of the study performed during my graduation at the University of Twente in Enschede. This study has been partly published during the International Conference on Intelligent Technologies for Interactive Entertainment 2011 (Appendix E). There are a few people to whom I wish to express my appreciation. First I wish to thank my examination committee: Prof. dr. ir. Anton Nijholt, Dr. Mannes Poel, Dr. Job Zwiers, Hayrettin Gürkök M.Sc and Danny Plass-Oude Bos M.Sc. With a special thanks to Hayrettin Gürkök and Danny Plass-Oude Bos for their continual input and support that significantly improved the quality of this work, and for the opportunities that I got to demonstrate and publish this research outside the university's boundaries. Next I wish to thank my family as they supported me and made it possible for me to pursue the educational career of my own choosing. 
Another thanks goes to my co-students for all the cooperation and friendships developed during the last years. Especially Gido Hakvoort for his cooperation, input, help with annotation and the discussions on both our works, and together with his brother Michiel Hakvoort for the great times outside either of our studies.
The Nintendo DS TM is a hand held game computer that includes a small sketch pad as one of it inp... more The Nintendo DS TM is a hand held game computer that includes a small sketch pad as one of it input modalities. We discuss the possibilities for recognition of simple line drawing on this device, with focus of attention on robustness and real-time behavior. The results of our experiments show that with devices that are now becoming available in the consumer market, effective image recognition is possible, provided a clear application domain is selected. In our case, this domain was the usage of simple images as input modality for computer games that are typical for small hand held devices.
Proceedings of the 16th International Conference on Multimodal Interaction, 2014
Touch behavior is of great importance during social interaction. To transfer the tactile modality... more Touch behavior is of great importance during social interaction. To transfer the tactile modality from interpersonal interaction to other areas such as Human-Robot Interaction (HRI) and remote communication automatic recognition of social touch is necessary. This paper introduces CoST: Corpus of Social Touch, a collection containing 7805 instances of 14 different social touch gestures. The gestures were performed in three variations: gentle, normal and rough, on a sensor grid wrapped around a mannequin arm. Recognition of the rough variations of these 14 gesture classes using Bayesian classifiers and Support Vector Machines (SVMs) resulted in an overall accuracy of 54% and 53%, respectively. Furthermore, this paper provides more insight into the challenges of automatic recognition of social touch gestures, including which gestures can be recognized more easily and which are more difficult to recognize.
Recently research into Brain-Computer Interfacing (BCI) applications for healthy users, such as g... more Recently research into Brain-Computer Interfacing (BCI) applications for healthy users, such as games, has been initiated. But why would a healthy person use a still-unproven technology such as BCI for game interaction? BCI provides a combination of information and features that no other input modality can offer. But for general acceptance of this technology, usability and user experience will need to be taken into account when designing such systems. Therefore, this chapter gives an overview of the state of the art of BCI in games and discusses the consequences of applying knowledge from Human-Computer Interaction (HCI) to the design of BCI for games. The integration of HCI with BCI is illustrated by research examples and showcases, intended to take this promising technology out of the lab. Future
Automatic lipreading is automatic speech recognition that uses only visual information. The relev... more Automatic lipreading is automatic speech recognition that uses only visual information. The relevant data in a video signal is isolated and features are extracted from it. From a sequence of feature vectors, where every vector represents one video image, a sequence of higher level semantic elements is formed. These semantic elements are "visemes" the visual equivalent of "phonemes" The developed prototype uses a Time Delayed Neural Network to classify the visemes.
A distributed architecture for a system simulating the emotional state of an agent acting in a virtual environment is presented. The system is an implementation of an event-appraisal model of emotional behaviour and uses neural networks to learn how the emotional state should be influenced by the occurrence of environmental and internal stimuli. Part of the modular system is domain-independent, so the system can easily be adapted to handle different events that influence the emotional state. A first prototype and a testbed for this architecture are presented.
In this paper we show that reinforcement learning can be used for minutiae detection in fingerprint matching. Minutiae are characteristic features of fingerprints that determine their uniqueness. Classical approaches use a series of image processing steps for this task, but lack robustness because they are highly sensitive to noise and image quality. We propose a more robust approach, in which an autonomous agent walks around in the fingerprint and learns how to follow ridges in the fingerprint and how to recognize minutiae. The agent is situated in the environment, the fingerprint, and uses reinforcement learning to obtain an optimal policy. Multi-layer perceptrons are used for overcoming the difficulties of the large state space. By choosing the right reward structure and learning environment, the agent is able to learn the task. One of the main difficulties is that the goal states are not easily specified, for they are part of the learning task as well. That is, the recognition of minutiae has to be learned in addition to learning how to walk over the ridges in the fingerprint. Results of successful first experiments are presented.
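The core idea, an agent that walks over the image and is rewarded for reaching interesting locations, can be illustrated with a much-reduced toy version. The paper uses multi-layer perceptrons over a large state space; the sketch below substitutes tabular Q-learning on a small grid with one hypothetical "minutia" cell, so every detail of the environment is a simplifying assumption.

```python
# Toy illustration of reward-driven walking (NOT the paper's MLP-based
# agent): tabular Q-learning on a 5x5 grid where one cell stands in for a
# minutia. The agent learns a policy that walks from the corner to it.
import numpy as np

rng = np.random.default_rng(1)
SIZE, GOAL = 5, (4, 4)                       # hypothetical minutia location
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
Q = np.zeros((SIZE, SIZE, len(ACTIONS)))

def step(pos, a):
    """Move within the grid; reward 1 for reaching the goal, small cost otherwise."""
    r = min(max(pos[0] + ACTIONS[a][0], 0), SIZE - 1)
    c = min(max(pos[1] + ACTIONS[a][1], 0), SIZE - 1)
    return (r, c), (1.0 if (r, c) == GOAL else -0.01), (r, c) == GOAL

alpha, gamma, eps = 0.5, 0.9, 0.2            # learning rate, discount, exploration
for _ in range(500):                          # training episodes
    pos = (0, 0)
    for _ in range(50):
        a = int(rng.integers(4)) if rng.random() < eps else int(Q[pos].argmax())
        nxt, reward, done = step(pos, a)
        Q[pos][a] += alpha * (reward + gamma * Q[nxt].max() - Q[pos][a])
        pos = nxt
        if done:
            break

# Follow the learned greedy policy from the start.
pos, steps = (0, 0), 0
while pos != GOAL and steps < 20:
    pos, _, _ = step(pos, int(Q[pos].argmax()))
    steps += 1
print(f"reached goal in {steps} steps")
```

In the paper the hard part is precisely that "goal states" (minutiae) are not given in advance but must be recognized by the agent itself; this sketch fixes the goal only to show the reward-learning loop.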
Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, 2018
Digital game research has been growing rapidly, with studies dedicated to game experience and to adopting new technologies. Alongside, research into Brain-Computer Interfaces (BCI) is growing in game applications. Besides technical shortcomings, BCI research in gaming also faces challenges such as poorly designed games that do not provide a fun experience to their players. In this paper we present a novel multiplayer Steady-State Visually Evoked Potential (SSVEP) game, Kessel Run, with BCI-focused cooperative mechanics, drawing attention to the impact of game design on the user experience. Twelve participants played Kessel Run using a 2-electrode cap and rated their experience in a questionnaire. The SSVEP performance was lower than expected, with an average classification accuracy of 55% and a maximum of 79% at a 33% chance level. Despite the low performances, players still reported a state of Flow, felt behaviorally involved and empathized with each other, finding it enjoyable to play the game together.
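For readers unfamiliar with SSVEP: attending to a stimulus flickering at f Hz raises EEG power at f, so a minimal detector compares spectral power at the candidate stimulus frequencies and picks the largest. The sketch below shows that baseline on a synthetic signal; the frequencies, sampling rate, and signal are illustrative assumptions, not the game's actual classifier.

```python
# Minimal SSVEP frequency detector (baseline sketch, not Kessel Run's
# pipeline): pick the stimulus frequency with the most spectral power.
import numpy as np

FS = 250                          # sampling rate in Hz (assumed)
STIM_FREQS = [10.0, 12.0, 15.0]   # hypothetical flicker frequencies (3-class, 33% chance)

def classify_ssvep(eeg, fs=FS, freqs=STIM_FREQS):
    """Return the index of the stimulus frequency with the largest power."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    bins = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    powers = [spectrum[np.argmin(np.abs(bins - f))] for f in freqs]
    return int(np.argmax(powers))

# Synthetic 2-second trial: a 12 Hz response buried in noise.
rng = np.random.default_rng(2)
t = np.arange(2 * FS) / FS
trial = np.sin(2 * np.pi * 12.0 * t) + 0.8 * rng.standard_normal(t.size)
pred = classify_ssvep(trial)
print(pred)  # index 1, i.e. the 12 Hz target
```

Real systems typically use canonical correlation analysis or trained classifiers over multiple electrodes; with only two electrodes and in-game distraction, accuracies near the 55% the paper reports are plausible even when a clean synthetic signal like this one is classified perfectly.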
Papers by Mannes Poel