Papers by Seyed Hashem Davarpanah

Journal of Computer Science, Jul 1, 2012
Problem statement: In many vision-based surveillance systems, the first step is to detect moving objects by subtracting the current captured frame from the extracted background, so the results of these systems depend mainly on the accuracy of the background image. Approach: In this study, a background extraction system is presented that models the background with a simple method, initializes the model, extracts the moving objects and constructs the final background. The model saves the history of each pixel separately and uses this saved information to extract the background with a probability-based method; the history of each pixel is then updated according to the value of that pixel in the current captured frame. Results: The experiments show that the quality of the final extracted background is the best among four recently re-implemented methods, while the time needed for the extraction remains acceptable. Conclusion: Because history-based methods use temporal information extracted from several previous frames, they are less sensitive to noise and sudden changes when extracting the background image.
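
As a rough illustration of the idea above, the sketch below keeps a small intensity histogram for every pixel and reports the most probable bin as the background value. It is a minimal reading of a history-based, probability-driven model, not the paper's exact formulation; the bin count and window length are assumptions. A moving-object mask can then be obtained by thresholding the difference between the current frame and the returned background.

    import numpy as np

    # Sketch of a history-based background model: every pixel keeps a histogram
    # of its recent intensities, and the most frequent bin is reported as the
    # background value. Window length and bin count are illustrative choices.
    class HistoryBackground:
        def __init__(self, shape, bins=32, window=100):
            self.bins, self.window = bins, window
            self.hist = np.zeros(shape + (bins,), dtype=np.int32)
            self.recent = []                                  # bin indices of recent frames
            self.rows, self.cols = np.indices(shape)

        def update(self, frame):
            idx = (frame.astype(np.int32) * self.bins) // 256
            self.hist[self.rows, self.cols, idx] += 1
            self.recent.append(idx)
            if len(self.recent) > self.window:                # forget the oldest frame
                old = self.recent.pop(0)
                self.hist[self.rows, self.cols, old] -= 1

        def background(self):
            best = np.argmax(self.hist, axis=-1)              # most probable bin per pixel
            return ((best * 256 + 128) // self.bins).astype(np.uint8)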

Journal of Computer Science, Nov 1, 2013
Shadows appear in many scenes. Humans can easily distinguish shadows from objects, but doing so automatically remains one of the challenges for intelligent shadow detection systems. Accurate shadow detection is difficult because of illumination variations in the background and the similarity in appearance between objects and the background. Color and edge information are two popular features used to distinguish cast shadows from objects. However, problems arise when the color difference between object, shadow and background is small, the edge of the shadow region is not clear, and the detection method relies on color or edge information alone. In this article a shadow detection method that uses both color and edge information is presented. To improve the accuracy of the color-based stage, a new formula is used in the denominator of the original c1c2c3 color model, and the hue difference between foreground and background is also exploited. Furthermore, edge information is applied separately, and the results are combined using a Boolean operator.
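
To make the color-plus-edge combination concrete, here is a small sketch: foreground pixels whose c1c2c3 chromaticity and hue stay close to the background, and which show no strong edge response, are labelled as shadow. The modified denominator from the article is not reproduced here; the c1c2c3 form below is the original one, and all thresholds are assumptions.

    import numpy as np
    import cv2

    def c1c2c3(img):
        # original c1c2c3 colour model: arctan of each channel over the max of the other two
        b, g, r = cv2.split(img.astype(np.float32) + 1.0)     # +1 avoids division by zero
        return np.arctan(np.stack([r / np.maximum(g, b),
                                   g / np.maximum(r, b),
                                   b / np.maximum(r, g)], axis=-1))

    def shadow_mask(frame, background, fg_mask, t_col=0.08, t_hue=15):
        d_col = np.abs(c1c2c3(frame) - c1c2c3(background)).sum(axis=-1)
        hue_f = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)[..., 0].astype(np.int16)
        hue_b = cv2.cvtColor(background, cv2.COLOR_BGR2HSV)[..., 0].astype(np.int16)
        d_hue = np.abs(hue_f - hue_b)
        edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 50, 150) > 0
        colour_says_shadow = (d_col < t_col) & (d_hue < t_hue)
        return fg_mask & colour_says_shadow & ~edges           # Boolean combination of the two cues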

The International Arab Journal of Information Technology, 2016
Local Binary Pattern (LBP) is invariant to monotonic changes in the grey-scale domain. This property enables LBP to provide a texture descriptor that is useful in applications dealing with local illumination changes. However, the existing versions of LBP cannot handle image illumination changes well, especially in outdoor environments, and these non-patterned illumination changes disturb the performance of background extraction methods. In this paper, an extended version of LBP called Back Ground Local Binary Pattern (BGLBP) is presented. BGLBP is designed for the background extraction application but is extendable to other areas as a texture descriptor. BGLBP is an extension of Direction LBP (D-LBP), Centre-Symmetric LBP (CS-LBP), Uniform Local Binary Pattern (ULBP) and Rotation-Invariant Uniform LBP (RIU-LBP), and it is designed to inherit the positive properties of these earlier versions. The performance of BGLBP as part of a background extraction method is investigated.
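
For readers unfamiliar with the base operator, the sketch below computes a plain 8-neighbour LBP code, the descriptor that D-LBP, CS-LBP, ULBP, RIU-LBP and BGLBP all build on; it is not the BGLBP definition itself, which combines properties of those variants.

    import numpy as np

    # Plain 8-neighbour LBP: each pixel is compared with its 3x3 neighbours
    # and the eight comparison bits form one byte-valued texture code.
    def lbp8(gray):
        g = gray.astype(np.int16)
        c = g[1:-1, 1:-1]                                     # centre pixels
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
        code = np.zeros_like(c, dtype=np.uint8)
        for bit, (dy, dx) in enumerate(offsets):
            neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
            code |= ((neighbour >= c).astype(np.uint8) << bit)
        return code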

Advances in Mathematical Finance & Applications, 2017
There is little consensus in the diversification literature on the relationship between corporate diversification and efficiency. According to corporate diversification theory, firms tend to pursue greater market share by diversifying in the local segment or in the international market, and there is a theoretical contradiction between viewing diversification as a profitable strategy and as a value-reducing one. In this paper, we measure firms' efficiency by applying Data Envelopment Analysis (DEA) to manufacturing firms listed on Bursa Malaysia over five years. A feed-forward multilayer perceptron neural network is then applied to model the mapping from the input and output data to the efficiency score; the back-propagation (BP) learning algorithm updates the network's weights by minimizing the cost function, and the best topology of the network is selected. The results of this study show that there is a negative relationship between total product diversi...
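
A minimal sketch of the neural part of the pipeline is shown below: a feed-forward multilayer perceptron trained with back-propagation to map each firm's DEA inputs and outputs to its efficiency score. The file names, network topology and hyper-parameters are illustrative assumptions, and the DEA scores themselves would come from a separate DEA solver.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    X = np.load("firm_inputs_outputs.npy")      # hypothetical file: one row per firm-year
    y = np.load("dea_efficiency_scores.npy")    # hypothetical file: DEA scores in [0, 1]

    X_scaled = StandardScaler().fit_transform(X)
    mlp = MLPRegressor(hidden_layer_sizes=(16, 8),   # topology would be chosen by validation
                       activation="logistic",
                       solver="sgd", learning_rate_init=0.01,
                       max_iter=2000, random_state=0)
    mlp.fit(X_scaled, y)
    print("training R^2:", mlp.score(X_scaled, y))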

Neural Computing and Applications, 2016
In a normal human brain, the inter-hemispheric fissure (IF) separates the brain into the left and the right hemispheres. In this paper, we model the IF as a mid-sagittal surface (MSS) on the input 3D brain MR image, and we introduce a new method to extract the MSS. In the proposed method, lacunarity is used to extract an initial symmetry plane, and fractal dimension (FD) is then calculated to measure the degree of similarity between the two brain hemispheres. Within each axial slice, a thin-plate spline (TPS) surface is constructed from the FD and intensity values, and a local optimization fits this TPS surface to the brain data using a robust least-median-of-squares estimator. Finally, the MSS is modelled as a stack of the fitted TPSs, and the optimization is applied again to smooth the final MSS, which is the output of our method. The efficiency of the proposed method is evaluated on both simulated and real MR images and compared to the state of the art. Our studies show that the proposed method recovers the mid-sagittal surface reliably as the noise level increases and in the presence of intensity non-uniformity (INU), in clinical images and pathological samples. This robustness is expected because FD and lacunarity are insensitive to noise and INU, and because the TPS optimization works locally.
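
As one concrete ingredient, the sketch below estimates a fractal dimension for a binary slice with the standard box-counting method; the paper's own FD and lacunarity computations and the TPS fitting step are not reproduced here, and the slice is assumed to be at least a few dozen pixels on each side.

    import numpy as np

    # Box-counting fractal dimension: cover the foreground with boxes of
    # decreasing size and fit the slope of log(count) versus log(1/size).
    def box_counting_fd(binary_slice):
        sizes, counts = [], []
        size = min(binary_slice.shape) // 2
        while size >= 2:
            h = (binary_slice.shape[0] // size) * size
            w = (binary_slice.shape[1] // size) * size
            boxes = binary_slice[:h, :w].reshape(h // size, size, w // size, size)
            occupied = boxes.any(axis=(1, 3)).sum()       # boxes containing foreground
            sizes.append(size)
            counts.append(max(int(occupied), 1))
            size //= 2
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope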

Communications in Computer and Information Science, 2015
Bio-ontologies are characterized by their large size, and a large number of smaller ontologies are derived from them. Determining semantic correspondences across these smaller ontologies can therefore be based on the "upper" ontology. To this end, we introduce a new fuzzy inference-based ontology matching approach that exploits upper ontologies as semantic bridges in the matching process. The approach comprises two main steps: first, a fuzzy inference-based matching method determines the confidence values in the ontology matching process; to learn the fuzzy system parameters and to improve the adaptability of the fuzzy membership function parameters, a gradient-based discriminative learning technique is exploited. Second, the achieved results are composed and combined to derive the final match result. The experimental results show that the performance of the proposed approach is acceptable compared with a well-known benchmark.
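
The sketch below is only meant to illustrate the flavour of a fuzzy matcher: two similarity signals for a concept pair, a direct lexical similarity and the similarity of the concepts' anchors in a shared upper ontology, are fuzzified with triangular memberships and combined by two hand-written rules into one confidence value. The memberships, rules and defuzzification are assumptions, not the learned system from the paper.

    def tri(x, a, b, c):
        # triangular membership with support [a, c] and peak at b
        return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

    def match_confidence(lexical_sim, upper_anchor_sim):
        high_l, low_l = tri(lexical_sim, 0.4, 1.0, 1.0), tri(lexical_sim, 0.0, 0.0, 0.6)
        high_u, low_u = tri(upper_anchor_sim, 0.4, 1.0, 1.0), tri(upper_anchor_sim, 0.0, 0.0, 0.6)
        w_match = min(high_l, high_u)      # rule 1: both similarities high -> match
        w_nomatch = min(low_l, low_u)      # rule 2: both similarities low  -> no match
        if w_match + w_nomatch == 0.0:
            return 0.5                     # rules silent: stay undecided
        return w_match / (w_match + w_nomatch)

    print(match_confidence(0.8, 0.7))      # both signals high -> confidence 1.0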

Lecture Notes in Computer Science, 2015
Ontology matching plays a crucial role in resolving semantic heterogeneity within knowledge-based systems. However, ontologies contain a massive number of concepts, which creates performance impediments during the ontology matching process. With the increasing number of ontology concepts, there is a growing need to focus on large-scale matching problems. To this end, this paper presents a new partitioning-based matching approach in which a new clustering method for partitioning the concepts of ontologies is introduced. The proposed method, called SeeCOnt, is a seeding-based clustering technique that aims to reduce the complexity of comparison by using only the clusters' seeds. In particular, SeeCOnt first identifies and determines the seeds of the clusters from the highest-ranked concepts using a distribution condition; the remaining concepts are then placed into the proper cluster by defining and using a membership function. The SeeCOnt method reduces memory consumption in the large-scale matching problem and also increases matching quality. The experimental evaluation shows that SeeCOnt achieves acceptable results compared with the top ten participant systems in OAEI.
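
A minimal sketch of a seeding-based partitioning in this spirit is given below: concepts are ranked, the highest-ranked mutually non-adjacent ones become seeds, and the remaining concepts are assigned by a membership score. The ranking (node degree) and membership (inverse path distance) used here are illustrative stand-ins for SeeCOnt's own functions.

    import networkx as nx

    def seed_partition(graph, n_clusters):
        # 1) rank concepts and take the highest-ranked, mutually non-adjacent ones as seeds
        ranked = sorted(graph.nodes, key=lambda n: graph.degree(n), reverse=True)
        seeds = []
        for node in ranked:
            if len(seeds) == n_clusters:
                break
            if all(not graph.has_edge(node, s) for s in seeds):   # distribution condition
                seeds.append(node)
        # 2) membership of a concept in a cluster: inverse shortest-path distance to its seed
        lengths = {s: nx.single_source_shortest_path_length(graph, s) for s in seeds}
        clusters = {s: [s] for s in seeds}
        for node in graph.nodes:
            if node in seeds:
                continue
            best = max(seeds, key=lambda s: 1.0 / (1 + lengths[s].get(node, float("inf"))))
            clusters[best].append(node)
        return clusters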

Journal of Computer Science, 2010
Problem statement: To extract moving objects, vision-based surveillance systems subtract the current image from a predefined background image. The efficiency of these systems depends mainly on the accuracy of the extracted background image, which should adapt to changes continuously; moreover, especially in real-time applications, the time complexity of this adaptation is critical. Approach: In this study, a combination of blocking and multi-scale methods is presented to extract an adaptive background. Because they are less sensitive to local movements, block-based techniques are suitable for controlling the movements of non-stationary objects, especially in outdoor applications, and can reduce the effect of these objects on the extracted background. We also use the blocking method to select intelligently the regions to which the temporal filtering has to be applied. In addition, an amended multi-scale algorithm is introduced. This hybrid algorithm combines nonparametric and parametric filters: it uses a nonparametric filter in the spatial domain to initialize two primary backgrounds, and two adapted two-dimensional filters then extract the final background. Results: The qualitative and quantitative results of our experiments show not only that the quality of the final extracted background is acceptable, but also that its running time is roughly half that of comparable methods. Conclusion: Using multi-scale filtering and applying the filters only to selected non-overlapping blocks reduces the running time of the background extraction algorithm.
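
The block-wise selection can be illustrated as below: the frame is split into non-overlapping blocks and a temporal running-average update is applied only to blocks that show little motion, so moving objects do not leak into the background. The block size, threshold and learning rate are assumptions, and the paper's multi-scale hybrid filtering is not reproduced.

    import numpy as np

    def update_background(background, frame, block=16, motion_thresh=12.0, alpha=0.05):
        bg = background.astype(np.float32)
        fr = frame.astype(np.float32)
        h, w = fr.shape[:2]
        for y in range(0, h - h % block, block):
            for x in range(0, w - w % block, block):
                fb = fr[y:y + block, x:x + block]
                bb = bg[y:y + block, x:x + block]
                if np.abs(fb - bb).mean() < motion_thresh:       # block is static
                    bg[y:y + block, x:x + block] = (1 - alpha) * bb + alpha * fb
        return bg.astype(frame.dtype)
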
2008 International Conference on Computer and Communication Engineering, 2008
Abstract: Using a query language to work with databases has always been a specialised and complex task. This complexity limits users of the data stored in a database to the fixed reports provided by some pre-implemented software. However, you ...
International Conference on Artificial Intelligence, 2003
In this paper, a method to calculate the Frenet apparatus of partially null and pseudo null curves in Minkowski space-time is presented.

Abstract: This paper presents a fuzzy system for Adaptive Cruise Control as a Driver Assistant System in intelligent vehicles. The main advantage of the suggested system is its simplicity. To test the designed system, a simulation tool was implemented. Using this tool, we can analyze the operation of the implemented Adaptive Cruise Control system on a simulated highway. Results show acceptable performance of the developed fuzzy system.
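
A toy version of such a controller is sketched below: the gap to the lead vehicle and the relative speed are fuzzified with triangular memberships, and a small hand-written rule table produces an acceleration command. The membership functions, rules and output values are assumptions, not the paper's design.

    def tri(x, a, b, c):
        # triangular membership with support [a, c] and peak at b
        return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

    def acc_command(gap_m, rel_speed_mps):
        close, far = tri(gap_m, 0, 0, 30), tri(gap_m, 20, 60, 60)
        closing, opening = tri(rel_speed_mps, -10, -10, 0), tri(rel_speed_mps, 0, 10, 10)
        # rules: close & closing -> brake; far & opening -> accelerate; otherwise hold
        w_brake = min(close, closing)
        w_accel = min(far, opening)
        w_hold = max(0.0, 1.0 - w_brake - w_accel)
        # weighted average of the rule outputs (-2, +1, 0 m/s^2)
        return (-2.0 * w_brake + 1.0 * w_accel) / (w_brake + w_accel + w_hold + 1e-9)

    print(acc_command(gap_m=12.0, rel_speed_mps=-4.0))   # small gap, closing fast -> negative (brake)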

It is desirable that acoustic vectors form separable clusters in the feature space; however, analysis of the common feature vectors does not support this assumption. This paper proposes a new method that manipulates the original features to produce a new feature set in which the classes have a more convex shape. The proposed methodology builds on the idea that different features have unequal power to discriminate between speakers, so an automatic weighting function based on the Multivariate Analysis of Variance (MANOVA) algorithm is proposed. MANOVA searches for a linear combination of the original features with the largest separation among the speakers. The Vector Quantization (VQ) algorithm is then used to identify speakers in the next stage. Although this algorithm is faster and less complex than other classification algorithms in this context, promising results are achieved.
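
The overall pipeline shape can be sketched as follows, with Linear Discriminant Analysis standing in for the MANOVA-based weighting (both look for linear combinations of features that separate the speakers) and k-means codebooks providing the VQ stage; the feature matrices, labels and codebook size are assumptions.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.cluster import KMeans

    def train(features, speaker_ids, codebook_size=16):
        speaker_ids = np.asarray(speaker_ids)
        lda = LinearDiscriminantAnalysis().fit(features, speaker_ids)
        projected = lda.transform(features)
        codebooks = {}
        for spk in np.unique(speaker_ids):
            km = KMeans(n_clusters=codebook_size, n_init=10, random_state=0)
            codebooks[spk] = km.fit(projected[speaker_ids == spk]).cluster_centers_
        return lda, codebooks

    def identify(lda, codebooks, utterance_features):
        x = lda.transform(utterance_features)
        def distortion(codebook):
            d = np.linalg.norm(x[:, None, :] - codebook[None, :, :], axis=-1)
            return d.min(axis=1).mean()            # mean distance to nearest codeword
        return min(codebooks, key=lambda spk: distortion(codebooks[spk]))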

The automatic quantitative and qualitative analysis of traffic systems plays an important role in controlling traffic flow. Computer-vision-based systems, using image processing algorithms, can separate moving objects from the background at each moment, which is very useful for gathering traffic information. In these systems an adaptive background image, updated according to the current conditions, is used to extract further information, and the accuracy of that information depends on this adaptive background image. This paper describes a system that extracts such an adaptive background. The algorithm used in this research is based on a genetic algorithm that adaptively calculates the combination parameter. This combination parameter, together with another static parameter, is used to run a weighted average between the most recently extracted background and several prior backgrounds, according to our proposed averaging formula; the result of this averaging is the final extracted background at that time. This project is ...
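
A toy version of the idea is sketched below: a tiny genetic algorithm searches for the combination weight used in a weighted average of the current background estimate and prior backgrounds. The fitness function here simply rewards staying close to a traffic-free reference frame; the paper's actual averaging formula and fitness are not reproduced.

    import random
    import numpy as np

    def combine(alpha, current_bg, prior_bgs):
        prior_mean = np.mean(prior_bgs, axis=0)
        return alpha * current_bg + (1.0 - alpha) * prior_mean

    def evolve_alpha(current_bg, prior_bgs, reference, generations=30, pop=20):
        population = [random.random() for _ in range(pop)]
        def fitness(a):
            # reward backgrounds that stay close to an (assumed) traffic-free reference
            return -np.abs(combine(a, current_bg, prior_bgs) - reference).mean()
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[:pop // 2]                    # selection
            children = [min(1.0, max(0.0, random.choice(parents) + random.gauss(0, 0.05)))
                        for _ in range(pop - len(parents))]    # mutation
            population = parents + children
        return max(population, key=fitness)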

J. Intell. Fuzzy Syst., 2020
Local Binary Pattern (LBP) is invariant to monotonic changes in the grey-scale domain. This property enables LBP to provide a texture descriptor that is useful in applications dealing with local illumination changes. However, the existing versions of LBP cannot handle image illumination changes well, especially in outdoor environments, and these non-patterned illumination changes disturb the performance of background extraction methods. In this paper, an extended version of LBP called Back Ground Local Binary Pattern (BGLBP) is presented. BGLBP is designed for the background extraction application but is extendable to other areas as a texture descriptor. BGLBP is an extension of Direction LBP (D-LBP), Centre-Symmetric LBP (CS-LBP), Uniform Local Binary Pattern (ULBP) and Rotation-Invariant Uniform LBP (RIU-LBP), and it is designed to inherit the positive properties of these earlier versions. The performance of BGLBP as part of a background extraction method is investigated.
