Papers by Tareque Chowdhury

International Journal of Computer Applications, Aug 23, 2013
The rapid development of bioinformatics has resulted in an explosion of DNA sequence data characterized by a large number of items. Studies have shown that biological functions are dictated by contiguous portions of the DNA sequence. Finding contiguous frequent patterns in long data sequences such as DNA sequences is a particularly challenging task and can pave the way towards new breakthroughs. Apriori-based techniques were among the first to be used in contiguous frequent pattern mining. Later, improved approaches such as GSP and PrefixSpan were applied, but they either required a large number of sequence scans, generated a large number of candidates, or produced a high number of intermediate sequential patterns. In this paper, an improvement of the positional-based approach for contiguous frequent pattern mining in DNA sequences is proposed. The proposed algorithm improves the existing positional-based approach by introducing a new amalgamated sorting and joining technique that reduces time and space complexity. The proposed approach outperforms traditional contiguous frequent pattern mining approaches.
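As a rough illustration of the underlying task (not the positional sorting-and-joining technique the paper proposes), the sketch below counts contiguous fixed-length patterns (k-mers) that meet a minimum support threshold across a set of DNA sequences; the function name, parameters, and toy data are assumptions for the example.

```python
from collections import defaultdict

def contiguous_frequent_patterns(sequences, k, min_support):
    """Return contiguous patterns (k-mers) that occur in at least
    min_support of the input DNA sequences."""
    support = defaultdict(int)
    for seq in sequences:
        # Count each distinct k-length contiguous substring once per sequence.
        seen = {seq[i:i + k] for i in range(len(seq) - k + 1)}
        for pattern in seen:
            support[pattern] += 1
    return {p: s for p, s in support.items() if s >= min_support}

# Example: length-3 patterns present in at least 2 of the 3 toy sequences.
dna = ["ATGCGATG", "GCGATGCA", "TTATGCGA"]
print(contiguous_frequent_patterns(dna, k=3, min_support=2))
```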

Scientific Data
Digital radiography is one of the most common and cost-effective standards for the diagnosis of bone fractures. Such diagnoses require expert intervention, which is time-consuming and demands rigorous training. With the recent growth of computer vision algorithms, there is a surge of interest in computer-aided diagnosis. The development of algorithms demands large datasets with proper annotations. Existing X-ray datasets are either small or lack proper annotation, which hinders the development of machine-learning algorithms and the evaluation of the relative performance of algorithms for classification, localization, and segmentation. We present FracAtlas, a new dataset of X-ray scans curated from images collected at 3 major hospitals in Bangladesh. Our dataset includes 4,083 images that have been manually annotated for bone fracture classification, localization, and segmentation with the help of 2 expert radiologists and an orthopedist using the open-source labeling platform...
An Evaluation of Transformer-Based Models in Personal Health Mention Detection
2022 25th International Conference on Computer and Information Technology (ICCIT)

ArXiv, 2018
Reconstruction of gene regulatory networks is the process of identifying gene dependencies from gene expression profiles through computational techniques. In the human body, all cells possess similar genetic material, but their activation states may vary. This variation in gene activation helps researchers understand more about the function of the cells. Microarray technology gives researchers insight into diseases such as mental illness, infectious disease, cancer, and heart disease. In this study, a cancer-specific gene regulatory network has been constructed using a simple and novel machine learning approach. In the first step, a linear regression algorithm identified the significant genes that were differentially expressed. Next, regulatory relationships between the identified genes were computed using the Pearson correlation coefficient. Finally, the obtained results were validated against available databases and literature. We can identify the hub...
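A minimal sketch of the two-step pipeline the abstract describes, under assumed data shapes (a genes-by-samples expression matrix and binary condition labels); the thresholds, function names, and exact regression formulation are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy import stats

def build_grn(expression, labels, p_cutoff=0.05, corr_cutoff=0.8):
    """expression: genes x samples array; labels: 0/1 condition per sample.
    Step 1: keep genes whose expression varies significantly with the label
    (simple per-gene linear regression). Step 2: connect kept genes whose
    pairwise Pearson correlation exceeds a threshold."""
    significant = []
    for g in range(expression.shape[0]):
        slope, intercept, r, p, se = stats.linregress(labels, expression[g])
        if p < p_cutoff:
            significant.append(g)

    edges = []
    for i, gi in enumerate(significant):
        for gj in significant[i + 1:]:
            r, _ = stats.pearsonr(expression[gi], expression[gj])
            if abs(r) >= corr_cutoff:
                edges.append((gi, gj, r))
    return significant, edges
```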
Computer vision vs human perception: Novel preprocessing technique to reduce inter-character similarity of Bangla alphabet
2013 International Conference on Informatics, Electronics and Vision (ICIEV), 2013
In the Bangla alphabet, most of the characters share the same features, and such similarity misleads a recognizer because it makes decisions based on measures of absolute difference. A preprocessing step called 'scrambling' and a modification of the existing grouping scheme are proposed to decrease the inter-character similarity among Bangla characters. The theory behind this proposal is inspired by experiments on human vision and perception by psychologists and vision scientists. Experimental results corroborate the theoretical predictions, leading us to propose scrambling and the new grouping technique as effective steps in character recognition.
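As a hedged illustration of the inter-character similarity the paper sets out to reduce (not the proposed scrambling step itself), the sketch below scores pairs of binarized character images by mean absolute difference; the function name, the dict-of-arrays input format, and the [0, 1] scoring are assumptions for the example.

```python
import numpy as np

def pairwise_similarity(glyphs):
    """glyphs: dict mapping a character label to a binarized 2-D image array
    of a common size. Returns a score in [0, 1] for every pair, where a
    higher score means the two character images are harder to tell apart."""
    labels = list(glyphs)
    scores = {}
    for i, a in enumerate(labels):
        for b in labels[i + 1:]:
            diff = np.abs(glyphs[a].astype(float) - glyphs[b].astype(float))
            scores[(a, b)] = 1.0 - diff.mean()  # small absolute difference -> high similarity
    return scores
```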
International Journal of Computer Applications, 2012
Most of the known encoding schemes for the Bengali language have a common drawback: the character order in the encoding scheme differs from the linguistic order. As a result, sorting Bengali texts by encoded value does not sort them in correct linguistic order. Even if Bengali characters are encoded in linguistic order, because of special properties of Bengali conjunct characters, Bengali text cannot be sorted directly using only traditional sorting algorithms. In this paper, we propose an encoding scheme for Bengali script that supports sorting of texts simply by sorting them according to their encoded values. Thus the new encoding scheme can save a significant amount of processing time for sort operations over large volumes of Bengali text. General Terms: Natural Language Processing
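To make the sorting idea concrete (as a generic illustration, not the paper's actual encoding, which also has to handle conjunct characters), the sketch below sorts strings by a collation key built from an assumed linguistic-order rank table; the table entries and names are placeholders.

```python
# Illustrative collation-key sort: assign each character a rank that follows
# the desired linguistic order, then sort strings by their rank sequences.
# The rank table below is a tiny made-up fragment, not the paper's encoding.
LINGUISTIC_RANK = {"অ": 0, "আ": 1, "ই": 2, "ঈ": 3, "ক": 20, "খ": 21, "গ": 22}

def collation_key(word):
    # Characters missing from the table sort last in this toy example.
    return [LINGUISTIC_RANK.get(ch, len(LINGUISTIC_RANK)) for ch in word]

words = ["খগ", "অক", "ইক"]
print(sorted(words, key=collation_key))  # sorted by linguistic rank, not code point
```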
Transfer Learning-based Ensemble Approach for Organ Classification: An Empirical Study
2022 12th International Conference on Electrical and Computer Engineering (ICECE)

A Novel Approach to Classify Electrocardiogram Signals Using Deep Neural Networks
2020 2nd International Conference on Computer and Information Sciences (ICCIS)
Atrial fibrillation (AF) is an abnormal heart rhythm condition caused by sporadic firing of electrical impulses from multiple places in the atria. AF is considered one of the leading causes of stroke in the world. AF is usually screened manually by reading an electrocardiogram (ECG). Manual ECG reading is a tedious and time-consuming task that is prone to human error, so an automated process is essential. However, discerning anomalies in heart function with an efficient automated process has long been a challenging task. In this paper, the authors propose a neural network architecture, CNN+LSTM, for classification among four types of heart condition (Normal, Atrial Fibrillation, Noisy Sinus Rhythm, and Alternative Rhythms) using a dataset from the PhysioNet 2017 challenge. Volunteers in the PhysioNet 2017 challenge dataset came from diverse backgrounds with wide variation in their physical attributes, making the dataset sufficiently reliable. In addition, the number of samples in this dataset exceeded any previous dataset on this topic, which further adds to its comprehensiveness. The authors' method reached a peak accuracy of 91.19% with the proposed model.
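A minimal Keras sketch of a CNN+LSTM classifier for single-lead ECG segments, in the spirit of the architecture named in the abstract; the layer sizes, segment length, and training setup are assumptions, not the authors' exact model.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_lstm(segment_len=3000, num_classes=4):
    model = models.Sequential([
        layers.Input(shape=(segment_len, 1)),            # single-lead ECG samples
        layers.Conv1D(32, kernel_size=7, activation="relu"),
        layers.MaxPooling1D(pool_size=4),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(pool_size=4),
        layers.LSTM(64),                                  # summarize the convolved sequence
        layers.Dense(num_classes, activation="softmax"),  # Normal / AF / Noisy / Other
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_lstm()
model.summary()
```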