Figure 1. Hodgkin and Huxley's model circuit representation. It is described by these equations:

C dV/dt = I − g_Na m³h(V − E_Na) − g_K n⁴(V − E_K) − g_L(V − E_L)
dm/dt = α_m(V)(1 − m) − β_m(V)m, and analogously for the gating variables h and n.

Figure 6. Firing patterns of two types of excitatory (the first two) and two types of inhibitory neurons (taken from http://www.izhikevich.org/publications/spikes.htm); y is a uniform random variable between 0 and 1.

The Izhikevich model became very popular because its simplicity allows for building networks consisting of thousands of such neurons. While using it, however, we found that increasing the strength of the stimulus caused it to fire with higher and higher frequency, with no upper bound. This is not biologically plausible: no matter the strength of the input, neurons cannot fire during the absolute refractory period, which is needed for restoration of their membrane potentials. We thus corrected the condition for neuron firing (Strack, Jacobs and Cios, 2013) by changing it to account for the absolute refractory period.

Figure 7. A) Unbounded firing of the original Izhikevich model neurons; B) firing of the same neurons after the Strack et al. (2014) modification accounting for absolute refractory periods.

Figure 8. Comparison of the SAPR and STDP learning rules: the latter is fixed, while the former changes depending on the shape of the excitatory and inhibitory post-synaptic potential functions of the neurons.

The difference between the two rules is that SAPR uses a function that is continuous and differentiable (important in several applications); it is also dynamic because it uses the actual post-synaptic potential functions to modify the connection strengths between the neurons. In other words, the adjustments depend on the shape of SAPR, which in turn depends on the shape of the chosen post-synaptic functions in a given neural circuit.
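The refractory-period correction discussed above can be sketched in a few lines. This is a minimal illustration, not the Strack et al. implementation: it uses the regular-spiking parameter values (a, b, c, d) from Izhikevich (2003), and the 5 ms refractory period, forward-Euler step, and input currents are assumed values chosen for demonstration.

```python
def simulate_izhikevich(I, T=500.0, dt=1.0, t_ref=5.0):
    """Izhikevich neuron with an added absolute refractory period.

    I      : constant input current (model units)
    T, dt  : simulation length and Euler step, in ms
    t_ref  : assumed absolute refractory period, in ms
    """
    # Regular-spiking parameter values from Izhikevich (2003)
    a, b, c, d = 0.02, 0.2, -65.0, 8.0
    v, u = c, b * c
    last_spike = float("-inf")
    spike_times = []
    for step in range(int(T / dt)):
        t = step * dt
        # Absolute refractory period: the neuron cannot fire, no matter
        # how strong the input; its potential stays at the reset value.
        if t - last_spike < t_ref:
            v = c
            continue
        # Forward-Euler integration of the Izhikevich equations
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        # Original firing condition, now gated by the refractory test above
        if v >= 30.0:
            spike_times.append(t)
            v = c
            u += d
            last_spike = t
    return spike_times

# Even a very strong stimulus cannot push the rate above 1/t_ref:
weak = simulate_izhikevich(I=10.0)
strong = simulate_izhikevich(I=100.0)
```

With this change the firing rate saturates at 1/t_ref regardless of stimulus strength, instead of growing without bound as in Figure 7A.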
The left part of the SAPR function in Figure 8 (to the left of the y axis) is the chosen inhibitory PSP, while the right part is the chosen excitatory PSP; see again the two function shapes in Figure 4. In contrast, the STDP rule uses a static function, meaning that the adjustments are always the same; for a given Δt they do not depend on the shape of the inhibitory/excitatory PSPs.

Figure 9. A) Illustration of how the simple and complex cells extract specific features from input images; B) implementation of how the features are extracted and aggregated (using three hidden layers) in the Neocognitron to recognize digit 2 (both pictures are taken from Kandel et al., Principles of Neural Science, 5th edition, 2013).

The first researcher to design a direct precursor of DNNs, using Hubel and Wiesel's discoveries, was Fukushima (1980), who called his network the Neocognitron. Figure 10 A) illustrates how key features of an image of the letter A are first picked up by simple cells (S) and then aggregated by complex cells (C), in order to recognize the letter A at the output. The S-layer of simple cells extracts features from the previous stage in the hierarchy, while the C-layer of complex cells ensures tolerance to shifts of the features extracted by the S-layer.

Figure 10. A) Fukushima's Neocognitron architecture, and B) LeCun's convolutional neural network architecture.

Figure 11. IRNN's architecture: the unsupervised part consists of the sensory and feature-aggregating layers, while the associative part is supervised.

Figure 11 shows stacks of neurons represented by small balls. How they are generated and what they represent is explained in Figure 12. We see there three (hashed) subimages/windows of the three input images, which are clustered using a novel image similarity measure (Cios and Shin, 1995). If the first two subimages are similar (as shown), they are clustered together in neuron n1.
Since subimage 3 was found to be quite different from subimages 1 and 2, it creates its own cluster, so a second neuron, n2, is generated. The weights w1 and w2 are initially set to the first subimage's vector of pixel values but are later updated to represent the cluster center (thus representing an "average" subimage). At the end of scanning the entire images the result might be like the one shown in Figure 13. Notice that at the center more neurons (clusters) were created to represent image details, such as the nose, eyes and mouth, while at the periphery, where the background was about the same in all images, only single neurons/clusters were needed.

Figure 12. Explanation of clustering of subimages into a number of clusters/neurons.

Figure 14. Architecture of the network of spiking neurons: (a) high-level block diagram; (b) recurrent synaptic connections between the excitatory neurons in the feature extraction layer; (c) synaptic connections between the excitatory neurons in the sensory/feature extraction layer and the inhibitory neurons in the feature extraction layer (taken from the Shin et al. 2010 paper).

The same process is repeated on the outputs of the sensory layer to aggregate the local features into more complex
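The sequential clustering of subimages into neurons described in Figures 12 and 13 can be sketched as follows. Note that this sketch substitutes plain Euclidean distance and an assumed similarity threshold for the Cios and Shin (1995) image similarity measure; it only illustrates the cluster-or-create logic and the running-average center update.

```python
def cluster_subimages(subimages, threshold):
    """Sequential (leader) clustering: each cluster is one 'neuron'.

    subimages : list of pixel-value vectors (one per scanned window)
    threshold : assumed similarity cutoff; Euclidean distance is a
                stand-in for the Cios and Shin (1995) measure
    Returns a list of [center_vector, member_count] clusters.
    """
    clusters = []
    for x in subimages:
        # Find the nearest existing cluster center
        best, best_d = None, float("inf")
        for cl in clusters:
            d = sum((c - v) ** 2 for c, v in zip(cl[0], x)) ** 0.5
            if d < best_d:
                best, best_d = cl, d
        if best is not None and best_d <= threshold:
            # Similar subimage: update the center as a running average
            n = best[1]
            best[0] = [(c * n + v) / (n + 1) for c, v in zip(best[0], x)]
            best[1] = n + 1
        else:
            # Dissimilar subimage: generate a new neuron/cluster
            clusters.append([list(x), 1])
    return clusters

# Subimages 1 and 2 are similar; subimage 3 is quite different:
cls = cluster_subimages([[0.1, 0.2], [0.2, 0.1], [5.0, 5.0]], threshold=1.0)
# Two neurons are generated; the first holds the average of subimages 1 and 2.
```

Homogeneous regions (such as the uniform background at the image periphery) keep matching one center and produce a single neuron, while detailed regions exceed the threshold repeatedly and spawn many neurons, as in Figure 13.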