Sss Code Output

The document presents performance metrics for various deep learning and machine learning models, including CNN, LSTM, and pre-trained models like Electra and BERT, across different data handling techniques such as split, merge, and augmentation. It also includes results for machine learning models using tf-idf and ensemble methods, highlighting accuracy scores for each model configuration. The document serves as a comparative analysis of model effectiveness in processing and analyzing data.

Uploaded by devnathrimon78

Deep learning

Model  | Split | Augment split | Merge | Augment merge | W2V split | Augment W2V split
CNN    | 78.23 | 77.52         | 86.93 | 91.00         | 73.31     | 72.96
LSTM   | 72.71 | 73.58         | 85.23 | 88.57         | 64.10     | 64.45
BiLSTM | 74.87 | 75.20         | 85.20 | 88.65         | 65.99     | 66.08
ANN    | 74.23 | 73.22         | 84.73 | 87.64         | /         | /
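As a reading aid, the table values can be compared programmatically. This sketch copies the accuracies from the table above (the ANN has no Word2Vec runs) and picks the best data-handling configuration per model:

```python
# Accuracy values copied from the deep-learning table above.
results = {
    "CNN":    {"Split": 78.23, "Augment split": 77.52, "Merge": 86.93,
               "Augment merge": 91.00, "W2V split": 73.31, "Augment W2V split": 72.96},
    "LSTM":   {"Split": 72.71, "Augment split": 73.58, "Merge": 85.23,
               "Augment merge": 88.57, "W2V split": 64.10, "Augment W2V split": 64.45},
    "BiLSTM": {"Split": 74.87, "Augment split": 75.20, "Merge": 85.20,
               "Augment merge": 88.65, "W2V split": 65.99, "Augment W2V split": 66.08},
    "ANN":    {"Split": 74.23, "Augment split": 73.22, "Merge": 84.73,
               "Augment merge": 87.64},
}

# Best data-handling configuration per model: every model peaks on the
# augmented merged dataset.
best = {model: max(configs, key=configs.get) for model, configs in results.items()}
print(best)
```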

Pre-trained models

Model       | Split acc | Merge acc | Merge augment acc
ELECTRA     | 74.78     | 84.46     | 89.25
DistilBERT* | 75.56     | 84.66     | 89.47
BERT        | 76.62     | 85.66     | 88.97

Machine learning models: tf-idf

Model                | Split acc | Augment split acc | Merge acc | Augment merge acc
XGBoost              | 68.10     | 69.14             | 75.29     | 76.75
Random Forest        | 70.63     | 71.23             | 77.51     | 80.87
AdaBoost             | 63.32     | 63.00             | 65.61     | 65.48
Gradient Boosting    | 67.78     | 68.15             | 74.52     | 74.32
Decision Tree        | 65.16     | 65.30             | 73.29     | 77.41
Logistic Regression  | 68.93     | 67.69             | 67.04     | 65.71
K-Nearest Neighbors  | 51.03     | 54.73             | 62.08     | 68.70
Naive Bayes          | 65.48     | 65.14             | 61.52     | 64.66
SVM                  | 69.44     | 67.99             | 66.61     | 66.67
LightGBM             | 65.11     | 67.11             | 75.95     | 69.37
Bagged Decision Tree | 67.78     | 68.29             | 75.78     | 80.04
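A minimal sketch of the tf-idf setup these rows share, using scikit-learn defaults on a toy corpus (the actual datasets, vectorizer settings, and hyperparameters are not given in the document):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for the (unavailable) split/merge datasets.
texts = ["good product works well", "terrible quality broke fast",
         "excellent value great buy", "awful waste of money"]
labels = [1, 0, 1, 0]

# tf-idf features feeding one of the table's classifiers.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
acc = clf.score(texts, labels)  # training accuracy on the toy corpus
```

Any of the other classifiers in the table (Random Forest, SVM, Naive Bayes, ...) can be dropped in place of `LogisticRegression` in the same pipeline.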

ML ensemble

Model           | Merge data acc | Augment merge data acc
XGB + RF + GB   | 79.44          | 82.79
RF + SVM + ADA  | 81.01          | 84.23
RF + ADA + XGB  | 79.58          | 83.68
KNN + SVM + MNB | 74.92          | 79.96
RF + SVM + GBM  | 79.98          | 83.13
SVM + MNB + DT  | 76.48          | 81.47
RF + XGB + DT   | 77.98          | 82.40
RF + GB + DT    | 77.02          | 82.19
ADA + RF + DT   | 74.89          | 80.12
ADA + GB + DT   | 74.79          | 80.12
DT + SVM + ADA  | 74.62          | 80.19
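The best-scoring combination above (RF + SVM + ADA) could be built with scikit-learn's `VotingClassifier`. Whether hard or soft voting was used is not stated in the document, so this sketch shows soft voting on synthetic stand-in features:

```python
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.datasets import make_classification

# Synthetic features standing in for the tf-idf matrices used above.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0)),  # soft voting needs probabilities
                ("ada", AdaBoostClassifier(random_state=0))],
    voting="soft",  # average predicted class probabilities across the three models
)
ensemble.fit(X, y)
acc = ensemble.score(X, y)  # training accuracy on the synthetic data
```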

ML hybrid

Model                                      | Merge data acc | Augment merge data acc
Ensemble(XGB+RF+GB) + Decision Tree        | 96.50          | 95.46
Ensemble(XGB+RF+GB) + Random Forest        | 96.50          | 95.46
Ensemble(XGB+RF+GB) + XGBoost              | 91.02          | 88.31
Ensemble(XGB+RF+GB) + Bagged Decision Tree | 94.54          | 93.86
Ensemble(XGB+RF+GB) + AdaBoost             | 96.50          | 95.46
Ensemble(XGB+RF+GB) + SVM                  | 79.01          | 82.74
Ensemble(XGB+RF+GB) + KNN                  | 81.80          | 84.53
Ensemble(XGB+RF+GB) + Naive Bayes          | 73.69          | 75.93
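The "ensemble + model" hybrids are not specified in detail; one plausible reading is stacking, where a second-level model learns from the base ensemble's predictions. A sketch with scikit-learn's `StackingClassifier` (GradientBoosting stands in for XGBoost, which is a separate package):

```python
from sklearn.ensemble import (RandomForestClassifier, GradientBoostingClassifier,
                              StackingClassifier)
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

# Synthetic stand-in for the merged dataset's feature matrix.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Base ensemble members; a decision tree combines their predictions,
# mirroring the "Ensemble(XGB+RF+GB) + Decision Tree" row.
hybrid = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0))],
    final_estimator=DecisionTreeClassifier(random_state=0),
)
hybrid.fit(X, y)
acc = hybrid.score(X, y)  # training accuracy on the synthetic data
```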

[Figure: data pipeline. Train, test, and validation data are concatenated; the combined/merged dataset is cleaned (stop-word and punctuation removal on the text strings), optionally augmented, tokenized with an AutoTokenizer, and passed to the MODEL.]
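The cleaning steps in the pipeline above (stop-word and punctuation removal) can be sketched in plain Python; the stop-word list here is a stand-in, since the document does not name the one actually used:

```python
import string

# Illustrative stop-word list only; the pipeline's actual list
# (NLTK, spaCy, ...) is not specified in the document.
STOPWORDS = {"the", "a", "an", "is", "and", "of", "to"}

def preprocess(text: str) -> str:
    """Lower-case the text, strip punctuation, and drop stop words."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(word for word in text.split() if word not in STOPWORDS)

print(preprocess("The model, of course, is trained!"))  # model course trained
```

After this cleaning step, the text would go to the tokenizer (an AutoTokenizer for the pre-trained models, or tf-idf/Word2Vec features for the others).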
[Link] Split dataset: 78.23
[Link] Augment split dataset: 77.52
[Link] Merge dataset: 86.93
[Link] merge_augment dataset: 91.00
[Link] Split W2V dataset: 73.31
[Link] Augment split W2V dataset: 72.96
[Link] Split dataset: 72.71
[Link] Augment split dataset: 73.58
[Link] Merge dataset: 85.23
[Link] Augment merge dataset: 88.57
[Link] Split W2V dataset: 64.10
[Link] Augment split W2V dataset: 64.45
[Link]-LSTM Split dataset: 74.87
[Link]-LSTM Augment split dataset: 75.20
[Link]-LSTM Merge dataset: 85.20
[Link] Augment merge dataset: 88.65
86.84
[Link] Split W2V dataset: 65.99
[Link] Augment split W2V dataset: 66.08
[Link] Split dataset: 74.23
[Link] Augment split dataset: 73.22
[Link] Merge dataset: 84.73
[Link] Augment merge dataset: 87.64

ELECTRA split accuracy
ELECTRA merge accuracy
ELECTRA augment accuracy
DistilBERT split accuracy
DistilBERT merge accuracy
DistilBERT augment accuracy
BERT split accuracy
BERT merge accuracy
BERT augment merge accuracy