In this paper we study and characterize the controllability of a constant two-cell CNN (Cellular Neural Network) whose feedback matrix is symmetric or antisymmetric and whose input has all entries set to zero except the first. We give a precise description of the control in each case. This problem has previously been attacked in order to study complete stability and in the search for chaotic attractors; here the controllability itself is addressed.
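The abstract does not spell out the equations; the following is a hedged sketch of the kind of system being described, assuming the standard Chua-Yang two-cell model with a single scalar control acting on the first cell (the paper's exact notation may differ).

```latex
% Hedged sketch: a Chua--Yang two-cell CNN with the control entering only the first cell.
\begin{align*}
\dot{x}_1 &= -x_1 + a_{11}\, y(x_1) + a_{12}\, y(x_2) + u(t),\\
\dot{x}_2 &= -x_2 + a_{21}\, y(x_1) + a_{22}\, y(x_2),\\
y(x)      &= \tfrac{1}{2}\left(|x+1| - |x-1|\right),
\end{align*}
% with the feedback matrix A = (a_{ij}) symmetric (a_{12} = a_{21}) or
% antisymmetric (a_{12} = -a_{21}), and u(t) the single nonzero input entry.
```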
International Journal of Robotics and Control Systems, 2022
The application of deep learning technology has increased rapidly in recent years. Technologies in deep learning increasingly emulate natural human abilities, such as knowledge learning, problem-solving, and decision-making. In general, deep learning can carry out self-training without repetitive programming by humans. Convolutional neural networks (CNNs) are deep learning algorithms commonly used in a wide range of applications. CNNs are often used for image classification, segmentation, object detection, video processing, natural language processing, and speech recognition. A CNN has four types of layer: the convolutional layer, the pooling layer, the fully connected layer, and the non-linear (activation) layer. The convolutional layer uses kernel filters to compute the convolution of the input image, extracting its fundamental features. The pooling layer, placed between two successive convolutional layers, reduces the spatial dimensions of the feature maps. The third layer is the fully connected layer, commonly called the convolutional output layer. The activation function defines the output of a neural network, such as 'yes' or 'no'. The most common and popular CNN activation functions are Sigmoid, Tanh, ReLU, Leaky ReLU, Noisy ReLU, and Parametric Linear Units. The organization and function of the visual cortex greatly influence CNN architecture because it is designed to resemble the neuronal connections in the human brain. Some of the popular CNN architectures are LeNet, AlexNet, and VGGNet.
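As an illustration of the layer types listed above, here is a minimal sketch in PyTorch; it is not taken from the paper, and the layer sizes and names are arbitrary.

```python
# Hedged illustration (not from the paper): a minimal PyTorch model showing the
# layer types the abstract lists: convolution, non-linearity, pooling, fully connected.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),  # convolutional layer (kernel filters)
            nn.ReLU(),                                   # non-linear activation layer
            nn.MaxPool2d(2),                             # pooling layer (downsamples feature maps)
        )
        self.classifier = nn.Linear(8 * 14 * 14, num_classes)  # fully connected output layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: classify a batch of 28x28 grayscale images.
logits = TinyCNN()(torch.randn(4, 1, 28, 28))
print(logits.shape)  # torch.Size([4, 10])
```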
The Cellular Neural Network (CNN) model is now a paradigm of cellular analog programmable multidimensional processor arrays with distributed local logic and memory. CNNs consist of many parallel analogue processors computing in real time. One desirable feature is that these processors, arranged in a two-dimensional grid, have only local connections, which lend themselves easily to VLSI implementation. The connections between these processors are determined by a cloning template, which describes the strength of nearest-neighbour interconnections. The cloning templates are space-invariant, meaning that all the processors have the same relative connections. In this paper we first describe the architecture of the CNN. Next, a new application of CNNs to 3D scene analysis is studied.
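A hedged sketch of how a space-invariant cloning template drives the cell dynamics, using a forward-Euler discretization of the standard Chua-Yang state equation; the template values below are illustrative only and are not taken from the paper.

```python
# Hedged sketch (not from the paper): forward-Euler simulation of the standard
# Chua--Yang CNN state equation x' = -x + A*y + B*u + I with space-invariant
# 3x3 cloning templates A (feedback) and B (control).
import numpy as np
from scipy.signal import convolve2d

def apply_template(img, tmpl):
    # Template application is a correlation; flip the template so convolve2d computes it.
    return convolve2d(img, np.flip(tmpl), mode="same")

def cnn_step(x, u, A, B, I, dt=0.1):
    y = 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))   # standard piecewise-linear output
    dx = -x + apply_template(y, A) + apply_template(u, B) + I
    return x + dt * dx

# Illustrative templates only: pure self-feedback in A, identity control in B.
A = np.zeros((3, 3)); A[1, 1] = 2.0
B = np.zeros((3, 3)); B[1, 1] = 1.0
u = np.random.rand(16, 16) * 2.0 - 1.0   # input image in [-1, 1]
x = np.zeros_like(u)                     # initial state
for _ in range(100):
    x = cnn_step(x, u, A, B, I=0.0)
```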
The assumption behind this system is that it is essential to understand information transfer in a networked system if the system is to develop an internal worldview needed for the creation of meaning. The pivotal idea in Ray Kurzweil’s “How to Create a Mind” is “a basic ingenious mechanism for recognizing, remembering, and predicting a pattern, repeated in the neo-cortex hundreds of millions of times, and organized in a hierarchy of increasing levels of abstraction”, and Hofstadter’s investigations in “Surfaces and Essences” make it clear that “perception is inseparable from high-level cognition”. The goal of the system described in this article is to create an autonomous deep learning system that can deal with the complete set of possible states of 2D changing patterns. In this system, fast data inputs are classified and memorized in modules composed of cellular networks housed in a multilevel architecture. Queries, recall, recognition, and prediction are all part of the system, whose essence lies in distributed memory, asynchronous independent cellular calculations, and internal upward and downward feedback. The problem of state space is resolved by dividing patterns into local primitives capable of dealing with the total local state space and sharing that information with surrounding local units and with higher and lower levels.
Neural Networks, 2009. IJCNN 2009. …, 2009
This improvement translates to faster image processing algorithms compared to traditional CPU-based algorithms. Cellular neural networks (CNNs) are similar to well-known artificial neural networks (ANNs) in that they are composed of many distributed processing elements called "cells", which are connected in a network; however, there are several important differences between CNNs and ANNs (see Table I).

Table I: CNNs and ANNs compared.
                       CNNs                 ANNs
  topology             uniform 2D grid      usually feed-forward
  processing element   dynamic equations    nonlinear weighted sum
  common uses          image processing     classification, control

Instead of the usual feed-forward, layered architecture seen in many types of neural networks, CNNs were designed to operate on a two-dimensional grid where each processing element (cell) is connected to neighboring cells in the grid. The cells comprising a CNN communicate by sending signals to neighboring cells in a manner similar to ANNs, but the signals are processed by each cell in a unique way. Specifically, CNN cells maintain a state which evolves through time according to differential (or difference) equations that depend on the cell's inputs and feedback. CNNs are composed of many cells arranged in a grid M. To simplify discussion, we assume these grids are always square with dimensions m x m, giving m^2 cells. Each cell in the grid is denoted v_ij for i, j in M; thus the cells are labeled from v_11 to v_mm. We define two types of cell depending on their location in the grid: inner cells and boundary cells. Boundary cells occur near the edges of the grid; inner cells occur everywhere else. Boundary cells necessarily have different properties than inner cells because they are connected to fewer neighboring cells. Each inner cell is the center of a neighborhood N_ij of n x n cells. By this definition n must be odd, and usually n = 3. By convention, each cell in a given neighborhood is assigned an index k from 1 to n^2, with k = 1 denoting the center cell, as shown in the figure; thus any given center cell v_ij = v_1 belongs to its own neighborhood.
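A small sketch (not from the paper) of the grid and neighborhood bookkeeping described above, assuming the 3x3 neighborhood convention and zero-based indices rather than the one-based labels v_11..v_mm used in the text.

```python
# Hedged sketch (not from the paper): inner vs. boundary cells and 3x3 neighborhoods
# on an m x m CNN grid, following the indexing convention described above.
def is_boundary(i, j, m, n=3):
    """A cell is a boundary cell if its n x n neighborhood would leave the grid."""
    r = n // 2
    return i < r or j < r or i >= m - r or j >= m - r

def neighborhood(i, j, m, n=3):
    """Return the (row, col) indices of the n x n neighborhood of inner cell (i, j)."""
    r = n // 2
    assert not is_boundary(i, j, m, n), "boundary cells have truncated neighborhoods"
    return [(i + di, j + dj) for di in range(-r, r + 1) for dj in range(-r, r + 1)]

m = 5
inner = [(i, j) for i in range(m) for j in range(m) if not is_boundary(i, j, m)]
print(len(inner))              # 9 inner cells on a 5 x 5 grid
print(neighborhood(2, 2, m))   # the 9 cells around the center cell, center included
```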
International Journal of Bifurcation and Chaos, 2005
The paper stresses the universal role that Cellular Nonlinear Networks (CNNs) are assuming today. It is shown that the dynamical behavior of 3D CNN-based models allows us to approach new emerging problems, to open new research frontiers such as the generation of new geometrical forms, and to establish links between art, neuroscience, and dynamical systems.
Physics Letters A, 2005
A Cellular Nonlinear Network (CNN) based on uncoupled nonlinear oscillators is proposed for image processing purposes. It is shown theoretically and numerically that the contrast of an image loaded at the nodes of the CNN is strongly enhanced, even if it is initially weak. Image inversion can also be obtained without reconfiguring the network, whereas gray-level extraction can be performed with an additional threshold filtering. Lastly, an electronic implementation of this CNN is presented.
International Journal of Circuit Theory and Applications, 2005
This paper presents a cellular neural network (CNN) scheme employing a new non-linear activation function, called the trapezoidal activation function (TAF). The new CNN structure can classify linearly non-separable data points and realize Boolean operations (including eXclusive OR) by using only a single-layer CNN. In order to simplify the stability analysis, a feedback matrix W is defined as a function of the feedback template A, and the 2D equations are converted to 1D equations. The stability conditions of the CNN with TAF are investigated and a sufficient condition for the existence of a unique equilibrium and global asymptotic stability is derived. By processing several examples of synthetic images, the analytically derived stability condition is also confirmed. In earlier work, CNN stability has been analysed for the standard activation function, and global exponential stability conditions for CNNs via a new Lyapunov function are stated in Reference [9]. It is well known that standard uncoupled single-layer CNN structures, extremely useful for realizing Boolean functions, are not capable of classifying linearly non-separable data. The parity is a binary function of the inputs, which returns a high output if the number of inputs set to 1 is odd and a low output if that number is even. Therefore, for n inputs, the parity problem consists of being able to divide the n-dimensional input space into disjoint decision regions such that all input patterns in the same region yield the same output, and it is thus linearly non-separable. Uncoupled CNNs can only classify linearly separable data, that is, they can only separate the input space with hyperplanes [10]. Recently, a single perceptron-like cell with (i) a double threshold, (ii) implemented using only five MOS transistors, and (iii) capable of classifying data which are not linearly separable, has been reported in the literature.
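The paper's exact TAF definition is not given in the abstract; the sketch below implements a generic trapezoidal piecewise-linear activation for illustration only, with assumed breakpoints, and shows how such a band-shaped (double-threshold) output can separate the XOR pattern that a single hyperplane cannot.

```python
# Hedged illustration: a generic trapezoidal piecewise-linear activation.
# The breakpoints (a, b, c, d) are assumptions; the paper's exact TAF may differ.
import numpy as np

def taf(x, a=-2.0, b=-1.0, c=1.0, d=2.0):
    """-1 for x <= a or x >= d, +1 on the plateau [b, c], linear ramps in between."""
    y = np.full_like(x, -1.0, dtype=float)
    rise = (x > a) & (x < b)
    fall = (x > c) & (x < d)
    y[rise] = -1.0 + 2.0 * (x[rise] - a) / (b - a)
    y[(x >= b) & (x <= c)] = 1.0
    y[fall] = 1.0 - 2.0 * (x[fall] - c) / (d - c)
    return y

# XOR-like response: the state is the sum of two bipolar inputs; the plateau fires
# only when the inputs disagree.
for u1, u2 in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
    print(u1, u2, taf(np.array([float(u1 + u2)]))[0])
```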
IEEE Transactions on Circuits and Systems I: Regular Papers, 2004
We propose a programmable architecture for a single instruction multiple data image processor that has its foundation in the mathematical framework of simplicial cellular neural networks. We develop instruction primitives for basic image processing operations and show examples of processing binary and gray-scale images. Fabricated in deep submicron CMOS technologies, the complexity of the digital circuits and wiring in each cell is commensurate with pixel-level processing.
2010 12th International Workshop on Cellular Nanoscale Networks and their Applications (CNNA 2010), 2010
As the scaling down of CMOS circuits approaches its limits and transistors switch faster and faster, transmitting information between different areas of an integrated circuit becomes critically important. The speed of signals is determined by the physical properties of the medium; therefore the distance between the elements should be decreased to improve performance. Array processors are a good candidate to solve this problem. A similar approach is required on today's high-performance field-programmable logic devices, where wire delay dominates over gate (LUT) delay. The centralized control unit of a configurable accelerator can become a performance bottleneck on current state-of-the-art FPGAs. In this paper a process-network-inspired approach is given to create distributed control units. The advantage of the proposed method is shown by designing a complex multi-layer array computing architecture to emulate the operation of a mammalian retina in real time.
In this contribution, we propose the use of Cellular Neural Networks for the image segmentation of cinematographic image sequences. The proposed approach is based on a Cellular Neural Network cost function that takes into account motion and colour. Cellular Neural Networks are of particular interest for hardware implementation due to their inherent parallelism, and initial results using an FPGA simulator are also presented.
IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2000
The resonant tunneling diode (RTD) has found numerous applications in high-speed digital and analog circuits due to the key advantages associated with its folded-back negative differential resistance (NDR) current-voltage (I-V) characteristics as well as its extremely small switching capacitance. Recently, the RTD has also been employed to implement high-speed and compact cellular neural/nonlinear networks (CNNs) by exploiting its quantum-tunneling-induced nonlinearity and symmetrical I-V characteristics for both positive and negative voltages applied across the anode and cathode terminals of the RTD. This paper proposes an RTD-based CNN architecture and investigates its operation through driving-point-plot analysis, stability and settling-time study, and circuit simulation. Full-array simulation of a 128 x 128 RTD-based CNN for several image processing functions is performed using the Quantum Spice simulator designed at the University of Michigan, where the RTD is represented in the SPICE simulator by a physics-based model derived by solving Schrödinger's and Poisson's equations self-consistently. A comparative study between different CNN implementations reveals that the RTD-based CNN can be designed to be superior to conventional CMOS technologies in terms of integration density, operating speed, and functionality. Index Terms: Resonant tunneling diode (RTD), cellular neural/nonlinear network (CNN), full-array simulation, settling-time analysis. I. INTRODUCTION. Since its invention by Chua and Yang in 1988 [1], [2], the cellular neural/nonlinear network (CNN) has been much acclaimed as a powerful back-end analog array processor, capable of accelerating various computation-intensive tasks in image processing, pattern formation and recognition, motion detection, robotics, and various other real-time problem solving that requires complex computation [3]. In such real-world applications, massively parallel computation of spatial data over a 2-D surface is needed to process data in real time, although the computational functions are rather simple algebraic operations and each array element concurrently performs an identical operation. The features of CNN that make it an easily implementable parallel computing architecture relative to fully connected neural networks are mentioned in [4] but reiterated here:
IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, 2003
This paper presents a novel class of cellular neural networks (CNNs) in which the output of a cell is given by a piecewise-linear (PWL) function having multiple constant regions, or by a quantization function. CNNs with one of these output functions allow us to extend CNNs to image processing with multiple gray levels. Since each cell of the original CNN has a PWL output function with two saturation regions, image-processing tasks have mainly been developed for black-and-white output images. Hence, the proposed architecture extends the promising nature of the CNN further. Moreover, hysteresis characteristics are introduced for these functions, which make the network robust against noise. It is demonstrated mathematically that, under a mild assumption, the stability of a CNN whose output function has hysteresis characteristics is guaranteed, and simulation results are also presented.
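The abstract does not give the exact output functions; the following is a hedged sketch of a generic multi-level quantization (staircase) output, meant only to illustrate how multiple constant regions allow more than two gray levels.

```python
# Hedged illustration (not the paper's exact definition): a quantization-type output
# function with multiple constant regions, mapping the cell state to q gray levels in [-1, 1].
import numpy as np

def quantized_output(x, q=4):
    """Map state x to one of q equally spaced output levels in [-1, 1]."""
    levels = np.linspace(-1.0, 1.0, q)
    x = np.clip(x, -1.0, 1.0)
    idx = np.round((x + 1.0) / 2.0 * (q - 1)).astype(int)
    return levels[idx]

print(quantized_output(np.array([-0.9, -0.2, 0.1, 0.8])))  # four distinct gray levels
```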
IEEE Circuits and Systems Magazine, 2001
The paradigm of Cellular Neural Networks (CNNs) is approaching complete maturity. In fact, from a methodological point of view, important results on their digitally programmable analog dynamics have been gained, complemented with thousands of application routines. This has encouraged the spreading of a great number of applications in the most diverse disciplines. Moreover, their structure, tailor-made for VLSI realization, has led to the production of some chip prototypes that, embedded in a computational infrastructure, have produced the first analogic cellular computers. This completes the framework and makes it possible to realize complex spatio-temporal and filtering tasks on a time scale of microseconds. In this paper some sketches of the main aspects of CNNs, from the formal to the hardware-prototype point of view, are presented together with some appealing applications to illustrate complex image, visual, and spatio-temporal dynamics processing.
This study employed Cellular Neural Networks (CNN) for edge detection in images due to their high operational speed. The process of edge detection is unavoidable in many image processing tasks such as obstacle detection and satellite picture processing. Conventional edge detector models such as the Sobel operator and the Roberts cross, among which Canny is the best, have high computational time. The CNN model is a class of differential equations that is known to have many application areas and high operational speed. The work investigated four parameters for performance evaluation: resolution, processing time, false alarm rate, and usability. The CNN model was modified by using the hyperbolic tangent (tanh x) and the Von Neumann neighborhood. The modified CNN model and the enhanced Canny model were implemented using MATLAB 7.0 running on a Pentium III personal computer with 128 MB RAM. A series of images served as input for both the Canny and the modified CNN model. With several images tested, the overall results indicated that the two models have similar resolutions, with average computational times of 1.1078 seconds and 2.293 seconds for the CNN-based and enhanced Canny models, respectively. The hyperbolic entry a22 of the cloning template A made our work fully controllable, since max(tanh x) = +1 and min(tanh x) = -1. A consideration of the set of digital images showed that edge maps which result from the Canny model have adjacent boundaries that tend to merge. The CNN model produced the optimum edge map, with edges that are one pixel wide and unbroken. The false alarm rate P_F, the probability that an edge detector declares an edge pixel due to noise given that there is no edge, was 0.8525 for the CNN model and 0.4595 for the Canny model. The CNN parameters can be adjusted and modeled to solve partial differential equations and maximum-likelihood problems for edge detection, while the Canny model cannot easily be adjusted for any other functionality. The CNN-based edge detector performed better than the popular Canny operator in terms of the computational time required, usability, and false alarm rate.
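A hedged sketch, not the paper's implementation, of a CNN-style edge detector that uses a tanh output and a Von Neumann (4-neighbour) neighbourhood; the template values, bias, and self-feedback gain below are assumptions (a standard Laplacian-like edge template).

```python
# Hedged sketch (not the paper's implementation): a discrete-time CNN-style edge
# detector with a tanh output and a Von Neumann (4-neighbour) control template.
# The template values, bias, and self-feedback gain are assumptions.
import numpy as np
from scipy.signal import convolve2d

B = np.array([[ 0.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  0.0]])   # Von Neumann neighbourhood: centre plus 4 neighbours

def cnn_edge(u, steps=50, dt=0.1, bias=-0.5):
    x = np.zeros_like(u)
    drive = convolve2d(u, B, mode="same") + bias   # input term, constant over time
    for _ in range(steps):
        y = np.tanh(x)                             # bounded output: max +1, min -1
        dx = -x + 2.0 * y + drive                  # assumed self-feedback entry a22 = 2
        x += dt * dx
    return np.tanh(x) > 0                          # binary edge map

img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0    # white square on black background
edges = cnn_edge(img)                              # one-pixel-wide boundary of the square
```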
This paper deals with obtaining robust parameter configurations for DT-CNNs and for a class of CT-CNNs (here called CT-CNNs with Discrete Configurations, DC-CT-CNNs), in the presence of additive and multiplicative implementation errors. Expressions that characterize the tolerance to both multiplicative and additive errors caused by circuit inaccuracies in DT-CNN and DC-CT-CNN VLSI implementations are first deduced. Taking those expressions into account, it is proposed to obtain robust parameter configurations, via a design process based on local rules, as the solution of a single linear programming problem. The process is applied to the generation of robust configurations for some tasks. The tolerance to errors of these configurations has been corroborated by simulations. The differences in parameter values and tolerance to errors between the robust configuration obtained for solving a particular task in DT-CNNs and that obtained in DC-CT-CNNs are given.

1. INTRODUCTION. The Cellular Neural Network model proposed by L. O. Chua and L. Yang [1] has been widely used for image processing tasks [2]. The time evolution of the state of a cell (neuron or pixel) c in an NxM-cell CNN is described by the differential equation [1]:

\dot{x}_c = -x_c + \sum_{n \in N_R(c)} A(c,n)\, y_n + \sum_{n \in N_R(c)} B(c,n)\, u_n + I,

where n denotes a generic cell belonging to the neighbourhood of cell c, N_R(c), with radius equal to R (for example, N_1(c) is the set of 3x3 cells centred on c, N_1(c) = {c-N-1, c-N, c-N+1, c-1, c, c+1, c+N-1, c+N, c+N+1}). x_c is the state of cell c, y_n and u_n are the output and the input, respectively, of cell n, I is an offset term, and the matrices A and B are called the feedback and control templates respectively. Some authors have proposed [3] a discrete-time (DT) version of the CNN, obtained by applying the Euler integration algorithm to discretize the cell state equation. The state of a cell c in an NxM-cell DT-CNN is then described by the corresponding difference equation.
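The difference equation itself is not reproduced in the excerpt above; the following is a hedged sketch of the Euler-discretized form it refers to (the paper's exact DT-CNN formulation, step-size handling, and output nonlinearity may differ).

```latex
% Hedged sketch: Euler discretization of the cell state equation above with step h
% (the paper's exact DT-CNN formulation and output nonlinearity may differ).
x_c(k+1) = (1-h)\,x_c(k) + h\Big(\sum_{n \in N_R(c)} A(c,n)\,y_n(k)
         + \sum_{n \in N_R(c)} B(c,n)\,u_n(k) + I\Big),
\qquad y_n(k) = \tfrac{1}{2}\big(|x_n(k)+1| - |x_n(k)-1|\big).
```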
2018
This paper presents the implementation of a chaotic Cellular Neural Network (CNN) on a Field Programmable Gate Array (FPGA). The network has two non-autonomous cells and exhibits chaotic behavior. In the implementation stage, the Verilog Hardware Description Language (HDL) is used and a discrete-time model of the network is coded in Xilinx ISE Design Suite 13.2. It appears that the chaotic attractor can be used as an entropy source or as a short key (seed) in chaos-based random number generator designs.
ISRN Machine Vision, 2012
An artificial cell comprises the most basic elements in a hierarchical system: it has minimal functionality, but is general enough to obey the rules of "artificial life." The ability to replicate, to organize a hierarchy, and to generalize within an environment are some of the properties of an artificial cell. We present a hardware artificial cell having the properties of generalization, self-organization, and reproducibility. The cells are used in a parallel hardware architecture for implementing a real-time 2D image convolution operation. The proposed hardware design is implemented on an FPGA and tested on images. We report improved processing speeds and demonstrate its usefulness in an image filtering application. (Figure 4: The hierarchical cell architecture for the 2D convolution operator.)
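The sketch below is not the paper's FPGA design; it is a hedged software reference for the 3x3 image convolution operation the hardware cells implement (the kernel is applied without flipping, as is common in image processing, and the sharpening kernel is only an example).

```python
# Hedged software sketch of the 3x3 image convolution the hardware cells implement
# (a reference illustration only, not the paper's FPGA design).
import numpy as np

def convolve3x3(image, kernel):
    """Direct 3x3 filtering with zero padding, one output pixel per cell."""
    h, w = image.shape
    padded = np.pad(image, 1)
    out = np.zeros_like(image, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i+3, j:j+3] * kernel)
    return out

sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)   # example filter only
img = np.random.rand(8, 8)
print(convolve3x3(img, sharpen).shape)  # (8, 8)
```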
Nonlinear Analysis: Real World Applications, 2010
It is well known that one-dimensional cellular neural networks (1-D CNNs) with the template A = [1, 2, −1] can perform connected component detection (CCD). However, this has been confirmed only by numerical and laboratory experiments. In this paper, sufficient conditions for 1-D CNNs to perform CCD are obtained through theoretical analysis. The main result shows that a wide class of templates including A = [1, 2, −1] can be used for CCD.
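A hedged sketch of the setting the abstract refers to, assuming the standard 1-D CNN dynamics with the piecewise-linear output and fixed zero boundary cells; the step size, duration, and example pattern are illustrative only.

```python
# Hedged sketch: Euler simulation of a 1-D CNN with template A = [1, 2, -1]
# performing connected component detection (CCD). Step size, duration, and the
# zero boundary condition are assumptions; see the paper for exact conditions.
import numpy as np

def pwl(x):
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))   # standard CNN output

def ccd(x0, steps=4000, dt=0.05):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        y = np.pad(pwl(x), 1)                           # fixed zero boundary cells
        dx = -x + 1.0 * y[:-2] + 2.0 * y[1:-1] - 1.0 * y[2:]   # A = [1, 2, -1]
        x += dt * dx
    return np.sign(pwl(x))

# Three connected components of black (+1) pixels:
pattern = [-1, 1, 1, -1, -1, 1, -1, 1, 1, 1]
print(ccd(pattern))   # components drift and pile up at one end of the array
```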