The controller is indispensable in software-defined networking (SDN). Through its feature set, a controller monitors the network and responds promptly to dynamic changes, so its performance directly affects the quality of service (QoS) in SDN. Every controller supports a set of features, but a given feature may be supported more strongly in one controller than in another. Moreover, a single controller leads to performance, single-point-of-failure (SPOF), and scalability problems. To overcome this, a controller with an optimum feature set must be available for SDN; furthermore, a cluster of such controllers can eliminate the SPOF and improve QoS. Herein, leveraging the analytic network process (ANP), we rank SDN controllers with respect to their supported features and create a hierarchical control plane based cluster (HCPC) of the top-ranked controller computed using the ANP, evaluating its performance on the OS3E topology. The results obtained in Mininet reveal that an HCPC environment with an optimum controller achieves improved QoS. Moreover, the experimental results validated in Mininet show that the proposed approach surpasses existing distributed controller clustering (DCC) schemes in several performance metrics, i.e., delay, jitter, throughput, load balancing, scalability, and CPU (central processing unit) utilization.
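The abstract above ranks controllers by their supported features using the analytic network process. As a minimal sketch of the core ANP/AHP computation (the comparison values and the three-controller setup below are illustrative assumptions, not data from the paper): priorities are obtained as the normalized principal eigenvector of a pairwise-comparison matrix.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for three controllers on one
# criterion, using a 1-9 preference scale: A[i][j] says how strongly
# controller i is preferred over controller j. Values are made up for
# illustration only.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

def priority_vector(matrix: np.ndarray) -> np.ndarray:
    """Return the normalized principal eigenvector, i.e. the priority
    weights of the alternatives (they sum to 1)."""
    eigvals, eigvecs = np.linalg.eig(matrix)
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    return principal / principal.sum()

weights = priority_vector(A)
ranking = np.argsort(weights)[::-1]  # indices, highest-weighted first
```

The full ANP also handles interdependence among criteria via a supermatrix; this sketch shows only the eigenvector step that produces a ranking from one comparison matrix.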
Deploying new optimized routing rules on routers is challenging, owing to the tight coupling of the data and control planes and the lack of global topological information. Due to the distributed nature of traditional Internet Protocol (IP) networks, routing rules and policies are disseminated in a decentralized manner, which causes looping issues during link failures. Software-defined networking (SDN) provides programmability to the network from a central point. Consequently, the nodes or data plane devices in SDN only forward packets, and the complexity of the control plane is handed over to the controller, which installs the rules and policies from a central location. With this central control, link failure identification and restoration become flexible because the controller has information about the global network topology; likewise, new optimized rules for link recovery can be deployed from the central point. Herein, we review several schemes...
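The review abstract above argues that central control simplifies link-failure restoration because the controller holds the global topology. A minimal sketch of that idea (not any specific scheme from the papers reviewed; the switch names are hypothetical): on a link failure, the controller simply drops the failed edge and recomputes a path.

```python
from collections import deque

def shortest_path(topology, src, dst, failed=frozenset()):
    """BFS over an adjacency dict, skipping links listed in `failed`."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            # Reconstruct the path by walking parents back to src.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for neighbor in topology[node]:
            link = frozenset((node, neighbor))
            if neighbor not in parent and link not in failed:
                parent[neighbor] = node
                queue.append(neighbor)
    return None  # destination unreachable

# Illustrative four-switch topology (hypothetical, not OS3E):
topo = {"s1": ["s2", "s3"], "s2": ["s1", "s4"],
        "s3": ["s1", "s4"], "s4": ["s2", "s3"]}
primary = shortest_path(topo, "s1", "s4")
backup = shortest_path(topo, "s1", "s4",
                       failed={frozenset(("s2", "s4"))})
```

Because the whole topology lives in one place, the recovery path is computed in a single call rather than by decentralized convergence, which is the looping-prone case the abstract describes for classical IP networks.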
In this paper, we compare the classification results of two models, Random Forest and J48, on twenty versatile datasets. We took 20 datasets from the UCI repository [1], with instance counts varying from 148 to 20,000, and compared the classification results obtained from Random Forest and the J48 decision tree. The classification parameters consist of correctly classified instances, incorrectly classified instances, F-measure, precision, accuracy, and recall. We discuss the pros and cons of using these models for large and small datasets. The classification results show that Random Forest gives better results for the same number of attributes on large datasets, i.e., those with a greater number of instances, while J48 is handy with small datasets (fewer instances). The results from the breast cancer datasets show that when the number of instances increased from 286 to 699, the percentage of correctly classified instances for Random Forest increased from 69.23% to 96.13%, i.e., for datasets with the same number of attributes but more instances, Random Forest accuracy increased.
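The abstract above evaluates classifiers on accuracy, precision, recall, and F-measure. As a minimal sketch of how those metrics are computed from predictions on a binary task (the label vectors below are illustrative, not the paper's data):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Compute accuracy, precision, recall and F-measure for the
    given positive class from paired true/predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f_measure": f_measure}

# Toy example: 8 instances, positive class = 1.
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
metrics = classification_metrics(y_true, y_pred)
```

"Correctly classified instances" in the abstract corresponds to the accuracy numerator here; F-measure is the harmonic mean of precision and recall, which is why it only rewards models that do well on both.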
Papers by Jehad Ali