techreports by Manuel Valenzuela-Rendón

This report presents a new approach to binary, black-box optimization, which has traditionally been attacked using stochastic methods. BODI (Black-box Optimization by Deterministic Identification) is a family of deterministic algorithms that identify the objective function, building an exact model of it, and then obtain the optima by examining this model without any further function evaluations. In this report, BODI algorithms are presented that optimize functions that can be modeled as a summation of subfunctions of bounded order, whether non-overlapping, overlapping in a structured manner, or randomly overlapping. BODI algorithms do not use a population of potential solutions, nor do they rely on statistical estimation of hyperplane evaluations or of linkage among variables. Nevertheless, BODI algorithms are competitive with other competent algorithms such as the fast messy genetic algorithm (fmGA) and the hierarchical Bayesian Optimization Algorithm (hBOA). Since BODI algorithms are deterministic, they find the optimum of the objective function in every single run. Additionally, BODI algorithms can find all the optima at the same time. This report presents three future avenues of research for the BODI approach.
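For intuition, here is a minimal sketch (in Python, not from the report) of the core idea in the simplest case: when the objective is a sum of subfunctions over known, non-overlapping blocks of bits, each block can be optimized by exhaustive enumeration with all other bits held fixed, and the per-block optima combine into the global optimum. The block structure being known in advance is an assumption of this sketch; identifying that structure is part of what BODI itself does.

```python
from itertools import product

def optimize_separable(f, blocks, n):
    """Deterministically optimize a black-box f: {0,1}^n -> R assumed to be
    a sum of independent subfunctions, one per block of bit positions.
    A didactic sketch of the idea, not the published BODI algorithm."""
    best = [0] * n
    for block in blocks:
        # Hold all other bits at 0 and enumerate this block exhaustively;
        # additivity means the other blocks contribute only a constant offset.
        best_val, best_bits = None, None
        for bits in product([0, 1], repeat=len(block)):
            x = [0] * n
            for pos, b in zip(block, bits):
                x[pos] = b
            v = f(x)
            if best_val is None or v > best_val:
                best_val, best_bits = v, bits
        for pos, b in zip(block, best_bits):
            best[pos] = b
    return best
```

The sketch uses at most 2^k function evaluations per block of size k, polynomial in the problem size for bounded k, and is fully deterministic.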
Developing Intelligent Systems involves artificial intelligence approaches, including artificial neural networks. Here, we present a tutorial on Deep Neural Networks (DNNs) and some insights about the origin of the term "deep"; references to deep learning are also given. Restricted Boltzmann Machines, which are the core of DNNs, are discussed in detail. An example of a simple two-layer network, performing unsupervised learning on unlabeled data, is shown. Deep Belief Networks (DBNs), which are used to build networks with more than two layers, are also described. Moreover, examples of supervised learning with DNNs performing simple prediction and classification tasks are presented and explained. This tutorial includes two intelligent pattern recognition applications: handwritten digits (the benchmark known as MNIST) and speech recognition.
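As a taste of the material such a tutorial covers, the following is a rough sketch of one contrastive-divergence (CD-1) update for a binary Restricted Boltzmann Machine. The learning rate, the pure-Python lists, and the use of hidden probabilities rather than samples in the update rule are illustrative choices on my part, not necessarily the tutorial's exact formulation.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sample(p):
    return 1 if random.random() < p else 0

def cd1_step(v0, W, b_vis, b_hid, lr=0.1):
    """One CD-1 update for a binary RBM with weight matrix W (visible x hidden).
    Returns the updated W; a didactic sketch, not an optimized trainer."""
    nv, nh = len(b_vis), len(b_hid)
    # Positive phase: P(h_j = 1 | v0), then a sampled hidden state.
    ph0 = [sigmoid(b_hid[j] + sum(v0[i] * W[i][j] for i in range(nv)))
           for j in range(nh)]
    h0 = [sample(p) for p in ph0]
    # Negative phase: reconstruct the visibles, then recompute hidden probs.
    pv1 = [sigmoid(b_vis[i] + sum(h0[j] * W[i][j] for j in range(nh)))
           for i in range(nv)]
    v1 = [sample(p) for p in pv1]
    ph1 = [sigmoid(b_hid[j] + sum(v1[i] * W[i][j] for i in range(nv)))
           for j in range(nh)]
    # Gradient estimate: lr * (<v0 h>_data - <v1 h>_model).
    for i in range(nv):
        for j in range(nh):
            W[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
    return W
```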
Papers by Manuel Valenzuela-Rendón
The Space Station Freedom will require the supply of items in a regular fashion. A schedule for the delivery of these items is not easy to design due to the large span of time involved and the possibility of cancellations and changes in shuttle flights. This paper presents the basic concepts of a genetic algorithm model, and also presents the results of an effort to apply genetic algorithms to the design of propellant resupply schedules. As part of this effort, a simple simulator and an encoding by which a genetic algorithm can find near-optimal schedules have been developed. Additionally, this paper proposes ways in which robust schedules, i.e., schedules that can tolerate small changes, can be found using genetic algorithms.
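The genetic-algorithm machinery such an application builds on can be sketched generically. The tournament selection, one-point crossover, and mutation rate below are illustrative textbook defaults; the schedule-specific encoding and simulator are what the paper itself contributes and are not reproduced here.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=20, gens=50, pm=0.02):
    """Minimal generational GA over bit strings: binary tournament
    selection, one-point crossover, and bitwise mutation with rate pm."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(gens):
        def tourney():
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tourney(), tourney()
            cut = random.randint(1, n_bits - 1)
            child = p1[:cut] + p2[cut:]
            # Flip each bit independently with probability pm.
            child = [b ^ (random.random() < pm) for b in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```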
International Conference on Genetic Algorithms, 1991
The fuzzy classifier system (FCS) combines the ideas of fuzzy logic controllers (FLCs) and learning classifier systems (LCSs). It brings together the expressive power of fuzzy logic as it has been applied in fuzzy controllers to express relations between continuous variables, and the ability of LCSs to evolve co-adapted sets of rules. The goal of the FCS is to develop a rule-based system capable of learning in a reinforcement regime, and that can potentially be used for process control.
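Two building blocks of such fuzzy rule systems, sketched under the common choices of triangular membership functions and min as the AND operator (the FCS itself may use different operators):

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def firing_strength(*memberships):
    """AND-combination of a rule's antecedent membership degrees via min."""
    return min(memberships)
```

A rule like "IF temperature is HIGH AND pressure is LOW THEN ..." then fires with strength `firing_strength(high(t), low(p))`, a continuous degree rather than a crisp match.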

Cognitive Science, 2017
Recent Machine Learning systems in vision and language processing have drawn attention to single-word vector spaces, where concepts are represented by a set of basic features or attributes based on textual and perceptual input. However, such representations are still shallow and fall short of symbol grounding. In contrast, Grounded Cognition theories such as CAR (Concept Attribute Representation; Binder et al., 2009) provide an intrinsic analysis of word meaning in terms of sensory, motor, spatial, temporal, affective, and social features, as well as a mapping to corresponding brain networks. Building on this theory, this research aims to understand an intriguing effect of grounding, i.e., how word meaning changes depending on context. CAR representations of words are mapped to fMRI images of subjects reading different sentences, and the contributions of each word are determined through Multiple Linear Regression and the FGREP nonlinear neural network. As a result, the FGREP model in pa...
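One of the two mapping tools mentioned, multiple linear regression, reduces in the single-predictor case to the closed-form fit below. This toy version (one feature plus an intercept) only illustrates the principle; the study regresses many CAR attributes against fMRI data at once.

```python
def simple_ols(xs, ys):
    """Ordinary least squares fit of y ≈ a + b*x for one predictor,
    using the textbook closed-form slope and intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b
```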
wseas.us
This paper proposes a new non-generational genetic algorithm for multiobjective optimization. The novelty in this approach is the use of a moving average to evaluate two things: the number of dominated individuals and the sharing function. Moreover, the proposed solution performs fewer comparisons than similar techniques and approaches. This solution has been evaluated on three different problems found in the literature and compared to other approaches that tackle them. The results indicate that the proposed algorithm is a feasible and simpler way to deal with multiobjective problems.
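The three ingredients the abstract names can be sketched as follows. The triangular sharing function and the exponential moving-average form are common textbook choices, assumed here rather than taken from the paper:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    no worse in every objective, strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def sharing(d, sigma):
    """Triangular sharing function: penalizes solutions closer than sigma."""
    return 1.0 - d / sigma if d < sigma else 0.0

def moving_average(prev, sample, alpha=0.1):
    """Exponential moving average, updating an estimate incrementally
    instead of recomputing it over the whole population each step."""
    return (1 - alpha) * prev + alpha * sample
```

Tracking dominance counts and niche counts through such moving averages is what lets a non-generational algorithm avoid re-comparing every pair of individuals after each single replacement.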

Neural Computing and Applications, 2017
This work introduces a new algorithmic trading method based on evolutionary algorithms and portfolio theory. The limitations of traditional portfolio theory are overcome using a multi-period definition of the problem. The model allows the inclusion of dynamic restrictions like transaction costs, portfolio unbalance, and inflation. A Monte Carlo method is proposed to handle these types of restrictions. The investment strategies method is introduced to make trading decisions based on the investor’s preference and the current state of the market. Preference is determined using heuristics instead of theoretical utility functions. The method was tested using real data from the Mexican market. It was compared against buy-and-hold strategies and single-period portfolios on metrics such as maximum loss, expected return, risk, the Sharpe ratio, and others. The results indicate that the investment strategies method trades with less risk than the other methods. Single-period methods attained the lowest performance in the experiments due to their high transaction costs. The conclusion is that investment decisions improve when information from many different sources is considered. Also, profitable decisions are the result of a careful balance between action (transaction) and inaction (buy-and-hold).
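Of the evaluation metrics listed, the Sharpe ratio has the simplest closed form. The sketch below uses the sample standard deviation and omits annualization; both are assumptions on my part rather than the paper's exact definition.

```python
def sharpe_ratio(returns, rf=0.0):
    """Mean excess return over the sample standard deviation of returns.
    Requires at least two return observations; annualization is omitted."""
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / (n - 1)
    return (mean - rf) / (var ** 0.5)
```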
SMC'03 Conference Proceedings. 2003 IEEE International Conference on Systems, Man and Cybernetics. Conference Theme - System Security and Assurance (Cat. No.03CH37483), 2003
This article presents the use of the simulated annealing algorithm to solve the waste minimization problem in roll cutting programming, in this case, for paper. Client orders, which vary in weight, width, and external and internal diameter, are fully satisfied, and no additional cuts to inventory are generated unless they are specified. Once an optimal cutting program is obtained, the algorithm...
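A generic simulated-annealing loop looks like the sketch below. The geometric cooling schedule and the Gaussian neighborhood used in the test are illustrative defaults, not the cutting-stock formulation of the article.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.95, steps=1000):
    """Generic SA minimization loop: always accept improvements, accept
    worsening moves with probability exp(-delta / T), cool geometrically."""
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        cy = cost(y)
        if cy <= c or random.random() < math.exp(-(cy - c) / t):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= cooling
    return best, best_c
```

For the roll-cutting problem, `cost` would measure trim waste of a cutting program and `neighbor` would perturb the assignment of orders to cutting patterns; those details are the article's contribution.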

Lecture Notes in Computer Science, 2005
David Goldberg has defined a competent genetic algorithm as one which “can solve hard problems, quickly, accurately, and reliably.” Among the competent genetic algorithms that have been developed are the Bayesian optimization algorithm (BOA), the fast messy genetic algorithm (fmGA), and the linkage learning genetic algorithm (LLGA). These algorithms have been tested on problems of bounded difficulty that are additively separable, formed by deceptive subproblems of order not greater than k. Competent genetic algorithms are stochastic, and thus can only be assured of attaining optimality in a probabilistic sense. In this paper, we develop a deterministic algorithm that solves to optimality all linearly decomposable problems in a polynomial number of function evaluations with respect to the maximum size of the subproblems, k. The algorithm presented does not rely on a population, and does not recombine individuals or apply any other genetic operator. Furthermore, because it is deterministic, the number of function evaluations required to find the optimum can be known in advance. The algorithm presented solves both the linkage and the optimization problems by finding the disjoint sets of related variables and the optimal values of these variables at the same time. The fact that such an algorithm can be devised has important implications for the design of GA-hard problems, and the development and evaluation of genetic optimization algorithms.
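The linkage half of the problem, deciding which variables belong to the same subproblem, can be probed deterministically with a perturbation test: two bits are linked when the effect of flipping one depends on the value of the other. The test below, which holds all remaining bits at 0, is a simplified illustration of this idea and not the paper's algorithm.

```python
def interacts(f, n, i, j):
    """Detect whether bits i and j of f: {0,1}^n -> R are non-additively
    coupled: does flipping bit i change f by a different amount for the
    two settings of bit j? All other bits are held at 0 in this sketch."""
    def ev(bi, bj):
        x = [0] * n
        x[i], x[j] = bi, bj
        return f(x)
    delta_j0 = ev(1, 0) - ev(0, 0)
    delta_j1 = ev(1, 1) - ev(0, 1)
    return abs(delta_j0 - delta_j1) > 1e-12
```

Running such a test over pairs of variables partitions an additively separable function into its disjoint subproblems with a number of evaluations polynomial in n, which is the flavor of guarantee the paper establishes.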

International Conference on Scientific Computing, 2007
In this paper, a novel higher-order accurate scheme, namely the high-order compact flowfield-dependent variation (HOC-FDV) method, has been used to solve one-dimensional problems. The method is fourth-order accurate in space and third-order accurate in time. Four numerical problems are solved to test the accuracy of the scheme and its ability to capture shock waves and contact discontinuities: the nonlinear viscous Burgers' equation, transient Couette flow, the shock tube (Sod problem), and the interaction of two blast waves. The solution procedure consists of tridiagonal matrix operations and produces an efficient solver. The results are compared with analytical solutions, the original FDV method, and other standard second-order methods. The results also show that the HOC-FDV scheme provides more accurate results and gives excellent shock-capturing capabilities.
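The tridiagonal matrix operations the abstract mentions are typically carried out with the Thomas algorithm; a standard implementation (generic, not specific to this paper) is sketched below, with the sub-, main, and super-diagonals passed as equal-length lists whose unused first and last entries are ignored.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system Ax = d in O(n).
    a: sub-diagonal (a[0] ignored), b: main diagonal,
    c: super-diagonal (c[-1] ignored), d: right-hand side."""
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward elimination.
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # Back substitution.
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Compact high-order schemes lead to exactly this kind of system at each step, which is why the overall solver stays efficient despite the higher order of accuracy.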

The central idea of coevolution lies in the fact that the fitness of an individual depends on its performance against the current individuals of the opponent population. However, coevolution has been shown to have problems [2,5]. Methods and techniques have been proposed to compensate for the flaws in the general concept of coevolution [2]. In this article we propose a different approach to implementing coevolution, called the incremental coevolutionary algorithm (ICA), in which some of these problems are solved by design. In ICA, the coexistence of individuals in the same population is as important as their performance against individuals in the opponent population. This is similar to the problem faced by learning classifier systems (LCSs) [1,4]. We take ideas from these algorithms and put them into ICA. In a coevolutionary algorithm, the fitness landscape depends on the opponent population, and therefore it changes every generation. The individuals selected for reproduction are those more promisin...
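The central fitness definition, an individual's average score against the current opponent population, can be written in one line; the `play` payoff function here is a placeholder for whatever game the two populations are coevolving on.

```python
def coevolutionary_fitness(individual, opponents, play):
    """Average payoff of `individual` against every member of the current
    opponent population; play(a, b) returns a's score against b."""
    return sum(play(individual, o) for o in opponents) / len(opponents)
```

Because the opponent population changes every generation, this value is a moving target, which is precisely the source of the pathologies the article sets out to address.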

Advances in Soft Computing
Wilson’s XCS represents and stores the knowledge it has acquired from an environment as a set of classifiers. In the XCS, don’t cares (#) may be used in the conditions of classifiers to express generalization. This paper focuses on the representation of knowledge with the minimal number of classifiers. For this purpose, a new process called fusion is implemented. Fusion promotes the emergence of more generalized yet accurate classifiers and the reduction of the number of macroclassifiers. Furthermore, to get even more compact rule sets, the implementation of the # symbol in the action of the classifiers is proposed; this allows generalization when possible, and the existence of non-competing classifiers in the population when a state has multiple equally correct actions that can be performed. The proposed modified generalized extended XCS (gXCS) was compared with the XCS on the Woods2 environment and a modification of this environment, modified-Woods2, that has locations where there are multiple equally good actions. The performances of XCS and gXCS are very similar; yet, gXCS obtains more parsimonious rule sets. Furthermore, gXCS can find good rule sets even when the probability of # is set to zero, contrary to the XCS.
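The ternary condition language the abstract refers to matches a binary state as follows. This is the standard XCS matching rule for conditions; extending # to actions, as the paper proposes, would add an action-selection step not sketched here.

```python
def matches(condition, state):
    """XCS-style ternary matching: each condition symbol is '0', '1',
    or '#' (don't care), and '#' matches either state bit."""
    return all(c == '#' or c == s for c, s in zip(condition, state))
```

A single generalized classifier such as `1#0` thus covers the two states `100` and `110`, which is how fewer macroclassifiers can represent the same knowledge.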
ArXiv, 2016
Developing Intelligent Systems involves artificial intelligence approaches, including artificial neural networks. Here, we present a tutorial on Deep Neural Networks (DNNs) and some insights about the origin of the term "deep"; references to deep learning are also given. Restricted Boltzmann Machines, which are the core of DNNs, are discussed in detail. An example of a simple two-layer network, performing unsupervised learning on unlabeled data, is shown. Deep Belief Networks (DBNs), which are used to build networks with more than two layers, are also described. Moreover, examples of supervised learning with DNNs performing simple prediction and classification tasks are presented and explained. This tutorial includes two intelligent pattern recognition applications: handwritten digits (the benchmark known as MNIST) and speech recognition.

The use of computing systems is nowadays something that is taken for granted. One only needs to look around to see that there is a computation process going on in almost every direction. Moreover, such processes are no longer restricted to personal computers but extend to devices such as cellular phones, PDAs, laptops, etc. Furthermore, these devices are not isolated units of processing but are interconnected and can send and receive information at any time, anywhere. The demands that people and current business models now place on computing systems go from running a simple application, where the hardware was not built specifically for that application, to a cooperative network where all constituents are using a variety of systems and commands. Managing all these networked devices as a whole in a robust, safe, secure, and transparent manner demands a lot of resources and time. This article presents the steps that have been taken in order to propose a new system architecture t...
Nature possesses the incredible ability to generate a number of autonomous processes which amaze scientists and researchers. This paper also makes use of a phenomenon seen in nature called excitable media. It is believed that, using this mechanism, self-star properties can be achieved. In this document a novel approach using this phenomenon is proposed. An environment simulating an excitation, in which autonomous agents live, was created, and the way this excitation makes these agents behave is presented. The results show that this methodology is a feasible way towards autonomic systems.