Human gait analysis is a standard method used for detecting and diagnosing diseases associated with gait disorders. Wearable technologies, due to their low costs and high portability, are increasingly being used in gait and other medical analyses. This paper evaluates the use of low-cost homemade textile pressure sensors to recognize gait phases. Ten sensors were integrated into stretch pants, achieving an inexpensive and pervasive solution. Nevertheless, such a simple fabrication process leads to significant sensitivity variability among sensors, hindering their adoption in precision-demanding medical applications. To tackle this issue, we evaluated the textile sensors for the classification of gait phases over three machine learning algorithms for time-series signals, namely, random forest (RF), time series forest (TSF), and multi-representation sequence learner (Mr-SEQL). Training and testing signals were generated from participants wearing the sensing pants in a test run under l...
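As an illustration of the classification setup described above, the following minimal sketch trains one of the three mentioned algorithms, a random forest, on synthetic windowed pressure signals. The data shapes, window length, and the four gait-phase labels are assumptions of this sketch; the TSF and Mr-SEQL pipelines and the paper's real sensor data are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical data: 200 gait windows, 10 textile sensors, 50 samples per window.
rng = np.random.default_rng(0)
X = rng.random((200, 10, 50))          # (windows, sensors, time samples)
y = rng.integers(0, 4, size=200)       # 4 assumed gait-phase labels

# Flatten each window into a single feature vector for the random forest.
X_flat = X.reshape(len(X), -1)

X_tr, X_te, y_tr, y_te = train_test_split(X_flat, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```

In practice, per-window summary statistics or the dedicated time-series classifiers named in the abstract would replace the naive flattening used here.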
CJ: An intelligent robotic head based on deep learning for HRI
2018 IEEE International Conference on Automation/XXIII Congress of the Chilean Association of Automatic Control (ICA-ACCA), 2018
The aim of this article is the design and construction of a robotic head intended for classification and recognition applications using algorithms based on Deep Learning. The robotic head classifies 1000 different objects using a convolutional neural network (CNN) trained on ImageNet. In addition, applying the transfer learning technique to a CNN allows the recognition of faces from a database of 185 people. The robotic structure also has a voice recognition system, which allows the user to interact with the robot using different voice commands. The structure is built mainly from 3D printed components, servomotors, and an Arduino UNO microcontroller, which oversees all the movements that the structure performs.
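Below is a minimal, hypothetical transfer-learning sketch in Keras in the spirit of the approach above: it reuses a MobileNetV2 backbone pretrained on ImageNet (the abstract does not state which architecture the paper used) and attaches a new 185-way softmax head for the face-identification task; data loading and training are omitted.

```python
import tensorflow as tf

# Backbone pretrained on ImageNet (1000 object classes); the top classifier is removed.
base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                         input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the convolutional features for transfer learning

# New head for the face-recognition task; 185 identities as reported in the abstract.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(185, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Fine-tuning some of the frozen backbone layers after the head converges is a common refinement of this recipe.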
Due to the increase in complexity of autonomous vehicles, most of the existing control systems are proving to be inadequate. Reinforcement Learning is gaining traction as it is poised to overcome these difficulties in a natural way. This approach allows an agent that interacts with the environment to get rewards for appropriate actions, learning to improve its performance continuously. The article describes the design and development of an algorithm to control the position of a wheeled mobile robot using Reinforcement Learning. One main advantage of this approach over traditional control algorithms is that the learning process is carried out automatically with a recursive procedure forward in time. Moreover, given the fidelity of the model for the particular implementation described in this work, the whole learning process can be carried out in simulation, which avoids damage to the actual robot during the learning stage. The work shows that the position control of the robot (or similar specific tasks) can be achieved without explicit knowledge of the dynamic model of the system. The main drawback is that the learning stage can take a long time to finish, depending on the complexity of the task and the availability of adequate hardware resources. This work provides a comparison between the proposed approach and traditional control laws in simulated and real environments. The article also discusses the main effects of using different controlled variables on the performance of the developed control law. INDEX TERMS Mobile robot, position control, reinforcement learning.
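To make the learning-by-reward idea concrete, here is a minimal tabular Q-learning sketch on a toy one-dimensional position task. It is not the paper's algorithm or robot model; every state, action, reward, and hyperparameter below is an assumption chosen only for illustration.

```python
import numpy as np

# Toy 1-D position-control task as a stand-in for the wheeled-robot setup: the
# agent starts in cell 0 and must reach cell 9.
n_states, n_actions, goal = 10, 2, 9
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1            # learning rate, discount, exploration
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    for _ in range(200):                      # cap episode length to keep the demo fast
        if s == goal:
            break
        # epsilon-greedy action selection: 0 = move left, 1 = move right
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
        r = 1.0 if s_next == goal else -0.01  # goal reward plus a small step cost
        # tabular Q-learning update (recursive, forward in time)
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print("greedy action per state (1 = move right):", Q.argmax(axis=1))
```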
Human activity recognition has attracted the attention of researchers around the world. This is an interesting problem that can be addressed in different ways, and many approaches have been presented in recent years. These applications provide solutions to recognize different kinds of activities, such as whether the person is walking, running, jumping, jogging, or falling, among others. Amongst all these activities, fall detection has special importance because it is a common dangerous event for people of all ages, with a more negative impact on the elderly population. Usually, these applications use sensors to detect sudden changes in the movement of the person. These kinds of sensors can be embedded in smartphones, necklaces, or smart wristbands to make them "wearable" devices. The main inconvenience is that these devices have to be placed on the subjects' bodies. This might be uncomfortable and is not always feasible, because this type of sensor must be monitored constantly and cannot be used in open spaces with unknown people. In this regard, fall detection from video camera images presents some advantages over wearable sensor-based approaches. This paper presents a vision-based approach to fall detection and activity recognition. The main contribution of the proposed method is to detect falls using only images from a standard video camera, without the need for environmental sensors. It carries out the detection using human skeleton estimation for feature extraction. The use of human skeleton detection opens the possibility of detecting not only falls but also different kinds of activities for several subjects in the same scene, so this approach can be used in real environments where a large number of people may be present at the same time. The method is evaluated with the UP-FALL public dataset and surpasses the performance of other fall detection and activity recognition systems that use that dataset. INDEX TERMS Fall detection, deep learning, human skeleton.
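The skeleton-based feature extraction mentioned above can be illustrated with a small, hypothetical example: given 2-D keypoints from any pose estimator, a couple of geometric features already separate upright from horizontal postures. The keypoint layout and the features below are assumptions of this sketch, not the paper's descriptors.

```python
import numpy as np

def pose_features(keypoints):
    """Two toy features from one frame of 2-D skeleton keypoints.

    `keypoints` is assumed to be an (N, 2) array of (x, y) pixel coordinates with
    y growing downwards; the indices used here (0 = head, 8 = mid-hip) follow a
    COCO/OpenPose-like layout and are an assumption of this sketch.
    """
    xs, ys = keypoints[:, 0], keypoints[:, 1]
    aspect = (xs.max() - xs.min()) / (ys.max() - ys.min() + 1e-6)  # wide vs. tall body
    trunk_drop = keypoints[8, 1] - keypoints[0, 1]                 # head-to-hip vertical span
    return np.array([aspect, trunk_drop])

# Fake skeletons: an upright pose and a horizontal (fallen) pose.
upright = np.column_stack([np.linspace(90, 110, 18), np.linspace(40, 200, 18)])
fallen = np.column_stack([np.linspace(40, 200, 18), np.linspace(170, 190, 18)])
print("upright:", pose_features(upright))
print("fallen: ", pose_features(fallen))
```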
This work presents the development and implementation of a distributed navigation system based on computer vision. The autonomous system consists of a wheeled mobile robot with an integrated colour camera. The robot navigates through a laboratory scenario where the track and several traffic signals must be detected and recognized using the images acquired with its on-board camera. The images are sent to a computer server that processes them and calculates the corresponding speeds of the robot using a cascade of trained classifiers. These speeds are sent back to the robot, which acts to carry out the corresponding manoeuvre. The classifier cascade must be trained before experimentation with two sets of positive and negative images. The number of images in these sets should be chosen carefully to limit the duration of the training stage and to avoid overtraining the system.
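As a hedged illustration of the server-side detection step, the sketch below applies a trained OpenCV cascade classifier to a camera frame. The cascade file and image path are hypothetical placeholders; the paper's trained cascade and image sets are not available here.

```python
import cv2

# Placeholder file names: 'signal_cascade.xml' stands for a cascade trained offline
# from the positive/negative image sets, and 'frame.jpg' for one image from the
# robot's on-board camera.
cascade = cv2.CascadeClassifier("signal_cascade.xml")
frame = cv2.imread("frame.jpg")
if cascade.empty() or frame is None:
    raise SystemExit("cascade XML or input frame not found")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# detectMultiScale scans the image at several scales and returns bounding boxes.
for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("frame_detections.jpg", frame)
```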
Anomaly detection refers to the problem of finding patterns in data that do not conform to expected behavior. These off-normal patterns are often referred to as anomalies, outliers, discordant observations, or exceptions in different application domains. The importance of anomaly detection stems from the fact that anomalies in data frequently carry significant and critical information in many application domains. In the particular case of nuclear fusion, there is a wide variety of anomalies that can be related to particular plasma behaviors, such as disruptions or L-H transitions. Unknown anomalies probably represent the major proportion of the total anomalies that can be found in fusion. Whether the anomaly is known or not, all the anomalies in a nuclear fusion device should be detected using the same approach, i.e. the physical state of the plasma during a shot should be reflected in some of the thousands of acquired signals. In this article, we study the application of Deep Learning, and in particular a recurrent neural network called LSTM, to detect anomalies in a discharge by forecasting.
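A minimal sketch of anomaly detection by forecasting, in the spirit of the approach described above: an LSTM is trained to predict the next sample of a signal, and samples whose forecast error is unusually large are flagged. The synthetic signal, window size, network size, and threshold are assumptions of this sketch, not the paper's settings.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in for a plasma signal: a sine wave with a short artificial
# anomaly injected near the end.
t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t)
signal[1500:1520] += 1.5                      # the injected "anomaly"

win = 32
# (window -> next sample) training pairs taken from the anomaly-free first half
X = np.array([signal[i:i + win] for i in range(1000 - win)])[..., None]
y = signal[win:1000]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(win, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)

# Forecast one step ahead over the whole discharge and flag large errors.
X_all = np.array([signal[i:i + win] for i in range(len(signal) - win)])[..., None]
err = np.abs(model.predict(X_all, verbose=0).ravel() - signal[win:])
threshold = 5 * err[:900].mean()              # baseline error from the normal region
print("samples with unusually large forecast error:",
      np.where(err > threshold)[0][:10] + win)
```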
Competency-based education is increasingly being adopted by higher education institutions all over the world. This paper presents a framework that assists instructors in this pedagogical paradigm, together with its corresponding open-source implementation. The framework supports the formal definition of competency assessment models and the evaluation of students under these models. It also provides distinct learning analytics for identifying course shortcomings and validating corrective actions instructors have introduced in a course. Finally, this paper reports the benefits of applying our framework to an engineering course at the Pontifical Catholic University, Valparaíso, Chile for three years. INDEX TERMS Competency-based education, course assessment, course monitoring, learning outcome.
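As a rough illustration of what a formal competency assessment model could look like in code, the snippet below encodes competencies as weighted sets of indicators and aggregates a student's scores. All names, weights, and the aggregation rule are assumptions and do not reflect the framework's actual data model.

```python
# Hypothetical competency model: each competency aggregates weighted indicators,
# and a student's achievement is the weighted sum of indicator scores (0-100).
model = {
    "C1: problem analysis": {"quiz_1": 0.3, "lab_report": 0.7},
    "C2: teamwork":         {"peer_review": 0.5, "project_demo": 0.5},
}

student_scores = {"quiz_1": 80, "lab_report": 65, "peer_review": 90, "project_demo": 70}

def competency_levels(model, scores):
    return {
        comp: sum(w * scores[ind] for ind, w in indicators.items())
        for comp, indicators in model.items()
    }

print(competency_levels(model, student_scores))
```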
Proximity sensors are broadly used in mobile robots for obstacle detection. The traditional calibration process for this kind of sensor can be a time-consuming task because it is usually done by identification in a manual and repetitive way. The resulting obstacle detection models are usually nonlinear functions that can differ for each proximity sensor attached to the robot. In addition, the model is highly dependent on the type of sensor (e.g., ultrasonic or infrared), on changes in light intensity, and on the properties of the obstacle, such as shape, colour, and surface texture, among others. That is why in some situations it can be useful to gather all the measurements provided by different kinds of sensors in order to build a single model that estimates the distances to the obstacles around the robot. This paper presents a novel approach to obtaining an obstacle detection model based on sensor data fusion and automatic calibration using artificial neural networks.
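A minimal sketch of the idea of fusing heterogeneous proximity readings into a single learned distance model: a small neural network is fitted to synthetic ultrasonic and infrared readings taken at known distances. The sensor response curves, network size, and data below are invented for illustration and are not the paper's calibration procedure.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic calibration data: each sample fuses one ultrasonic and one infrared
# reading taken at a known distance. The sensor models are invented.
rng = np.random.default_rng(0)
distance = rng.uniform(0.1, 2.0, size=500)                     # metres (ground truth)
ultrasonic = distance + rng.normal(0, 0.02, 500)               # roughly linear sensor
infrared = 1.0 / (distance + 0.1) + rng.normal(0, 0.05, 500)   # nonlinear sensor
X = np.column_stack([ultrasonic, infrared])

# A small multilayer perceptron learns the fused distance model automatically.
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
net.fit(X, distance)
print("predicted distance for readings [0.5, 1.7]:", net.predict([[0.5, 1.7]]))
```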
Remote Interoperability Protocol: A bridge between interactive interfaces and engineering systems
(This work has been funded by the National Plan Project DPI2012-31303 of the Spanish Ministry of Science and Innovation and FEDER funds.)
IFAC-PapersOnLine, 2015
The process of building remote or virtual laboratories to be deployed via the Internet usually involves communication between different software tools. Very often, there is a separation between the software which interfaces with the model or real system and the software responsible for providing the student with an interactive and visual representation of the data provided by the engineering system. Abstracting the way these two elements communicate with each other from the particular implementation, the requirements are frequently the same: connection and session control, data transmission control, user interaction handling, etc. This work describes a generic protocol to remotely interoperate with any kind of engineering software. The solution proposes to encapsulate all the communication issues into an interoperability API that can be implemented in many different systems. In order to show the flexibility of such an API, an implementation to interoperate with MATLAB from Java user interfaces via JSON-RPC is explained in detail.
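To give a feel for the kind of message such an interoperability API could exchange, here is a hypothetical JSON-RPC 2.0 request/response pair built in Python; the method and parameter names ("set_variable", "Kp") are illustrative and are not the protocol's actual vocabulary.

```python
import json

# Hypothetical JSON-RPC 2.0 exchange between a user interface and an engineering
# tool such as MATLAB. Method and parameter names are assumptions of this sketch.
request = {
    "jsonrpc": "2.0",
    "method": "set_variable",
    "params": {"name": "Kp", "value": 2.5},
    "id": 1,
}
wire_message = json.dumps(request)            # what travels over the connection
print(wire_message)

response = json.loads('{"jsonrpc": "2.0", "result": "ok", "id": 1}')
assert response["id"] == request["id"]        # session/data-flow control: match ids
```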
6th Chilean Conference on Pattern Recognition (CCPR), 2014
Thermal face recognition over time is a difficult challenge because faces vary with different factors such as metabolism or ambient conditions. Thus, the aim of this work is to improve the recognition rates of thermal faces acquired in time-lapse mode, since the results available in other articles are not entirely satisfactory in this modality, mainly due to the large variation in the thermal characteristics of faces over time lapses. To improve the recognition rates, the approach called "Sparse Representation" was chosen. This method represents an input image as a linear combination of a dictionary composed of images of different subjects and a vector of sparse coefficients. The results are obtained using the two sets of the UCHThermalFace database. The method shows high performance for thermal images in the time-lapse scenario.
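The sparse-representation classifier described above can be sketched with generic tools: a probe vector is coded as a sparse combination of dictionary columns (here via a Lasso solver rather than the paper's method) and assigned to the class with the smallest reconstruction residual. All data below is random and purely illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy sparse-representation classification: dictionary columns are (flattened)
# training images of several subjects; the probe is a noisy copy of one of them.
rng = np.random.default_rng(0)
n_subjects, per_subject, dim = 5, 6, 100
D = rng.normal(size=(dim, n_subjects * per_subject))            # dictionary
D /= np.linalg.norm(D, axis=0)                                  # unit-norm atoms
labels = np.repeat(np.arange(n_subjects), per_subject)

probe = D[:, 7] + 0.05 * rng.normal(size=dim)                   # image of subject 1

coder = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(D, probe)
residuals = [np.linalg.norm(probe - D[:, labels == c] @ coder.coef_[labels == c])
             for c in range(n_subjects)]
print("predicted subject:", int(np.argmin(residuals)))          # should be 1
```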
A Practical Demonstration of Reset Control with the Ball and Hoop System
The new information technologies provide great opportunities in control education. One of them is the use of remote control laboratories to teach the behaviour of control systems. This paper describes an approach to create interactive remote laboratories. Two main software tools are used: Simulink and Easy Java Simulations. The first is a widely used tool in the control community, whereas the second is an authoring tool designed to build interactive applications in Java without special programming skills. The remote laboratories created with this approach give students the opportunity to perform experiments with real equipment from anywhere, at any time, and at their own pace. The paper ends with an empirical study of this approach from a pedagogical point of view.
Information Retrieval and Classification with Wavelets and Support Vector Machines
Lecture Notes in Computer Science, 2005
Fusion plasma experiments generate hundreds of signals. In analyzing these signals, it is important to have automatic mechanisms for searching for similarities and retrieving specific data from the waveform database. The wavelet transform (WT) is a transformation ...
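A minimal sketch of the wavelet-plus-SVM idea referenced above: each waveform is compressed to its low-frequency wavelet coefficients, which then feed an SVM classifier. The synthetic signals, wavelet family, and decomposition level are assumptions of this sketch, not the paper's choices.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

# Two synthetic signal classes stand in for real fusion waveforms.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)

def make_signal(cls):
    base = np.sin(2 * np.pi * (5 if cls == 0 else 12) * t)
    return base + 0.2 * rng.normal(size=t.size)

X_raw = np.array([make_signal(c) for c in range(2) for _ in range(50)])
y = np.repeat([0, 1], 50)

# Keep only the level-4 approximation coefficients as a compact feature vector.
X = np.array([pywt.wavedec(sig, "db4", level=4)[0] for sig in X_raw])

clf = SVC(kernel="rbf").fit(X[::2], y[::2])        # train on even-indexed samples
print("test accuracy:", clf.score(X[1::2], y[1::2]))
```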
Dynamic clustering and neuro-fuzzy identification for the analysis of fusion plasma signals
Measurements in long pulse devices like ITER require the use of intelligent techniques to detect interesting events and anomalous behaviors within a continuous data flow. This detection will trigger the execution of some experimental procedures such as: increasing sampling rates, starting data sampling in additional channels or notifying the event to other diagnostics. In a first approach, an interesting event can be any non-average behavior in the expected temporal evolution of the waveforms. Therefore, a model of the ...
The paper presents the development of interactive real-time control labs using Easy Java Simulations (Ejs). Ejs is a free software tool that allows rapid creation of interactive simulations in Java. A new TrueTime-based kernel has been designed in Ejs in order to create multitasking real-time system simulations as well as soft real-time applications. The main features of these new capabilities are presented.
International Journal of Computers Communications & Control, 2013
This paper introduces the concept of "augmented reality" as a novel way to enhance visualization in remote laboratories for engineering education. In a typical remote experimentation session, students get access to a real plant located at the laboratory to carry out their assignments. Usually, the graphical user interface allows users to watch the equipment through a live video stream. However, in some cases, the visual feedback provided by the video stream can be enhanced by means of augmented reality techniques, which mix the video stream and computer-generated data in a single image. Such a mixture adds value to remote experimentation, increasing the sense of presence and reality and helping students understand the concepts under study much better. In this work, a Java-based approach to be used in the remote experimentation context for pedagogical purposes is presented. Firstly, a pure Java example is given to readers (including the source code), and then a more sophisticated example using a Java-based open-source tool known as Easy Java Simulations is introduced. This latter option takes advantage of a newly developed component, called camimage, which is an easy-to-use visual element that allows authors to capture the video stream from IP cameras in order to mix real images with computer-generated graphics.
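As a hedged illustration of mixing a camera stream with computer-generated data, the snippet below grabs one frame from an IP camera with OpenCV and draws an overlay on it. It is not the camimage component itself, and the camera URL and overlay content are placeholders.

```python
import cv2

# Minimal augmented-reality-style overlay on a single frame from an IP camera.
# The URL and the overlaid values are placeholders for this sketch.
cap = cv2.VideoCapture("http://camera.example/mjpg/video.mjpg")
ok, frame = cap.read()
cap.release()
if not ok:
    raise SystemExit("could not read a frame from the camera stream")

cv2.circle(frame, (320, 240), 12, (0, 0, 255), 2)                 # e.g. a setpoint marker
cv2.putText(frame, "level = 42.0 cm", (20, 40),
            cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
cv2.imwrite("augmented_frame.jpg", frame)
```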
In this paper, a synergy of advanced signal processing and soft computing strategies is applied in order to identify different types of human brain tumors, as a help to confirm the histological diagnosis of experts and consequently to facilitate the decision about the correct treatment or the necessity of an operation. A computational tool has been developed that merges, on the one hand, the wavelet transform to reduce the size of the biomedical spectra and to extract the main features, and on the other hand, Support Vector Machines and Neural Networks to classify them. The influence of some of the configuration parameters of each of these soft computing techniques on the clustering is analyzed. These two methods and another one based on medical knowledge are compared. The classification results obtained by these computational tools are promising, especially taking into account that medical knowledge has not been considered and that the number of samples of each class is very low in some cases.