The upcoming quantum era is widely believed to spell the end of the elliptic curve digital signature algorithm (ECDSA) and other number-theoretic digital signature schemes. Hence, technologies that incorporate ECDSA would be at risk once large-scale quantum computers become available. Distributed ledger technology is one of the potential victims of powerful quantum computers. Fortunately, post-quantum digital signature schemes are already available. Hash-based signature (HBS) schemes, owing to their simplicity and efficiency, have gained tremendous attention from the research community. However, large key and signature sizes are the major drawbacks of HBS schemes. This paper proposes a compact and efficient HBS scheme, "Smart Digital Signatures" (SDS), which is close to an existing popular HBS scheme, XMSS. SDS incorporates a novel one-time signature (OTS) scheme, SDS-OTS, into XMSS. Furthermore, SDS uses a slightly modified version of the key compression tree compared to XMSS. We have compared SDS with XMSS-WOTS and XMSS-WOTS+. The results reveal a significant reduction in hash-tree construction time compared to XMSS, and in key and signature sizes compared to WOTS and WOTS+. Finally, we also propose a model for incorporating SDS into a distributed ledger with the help of High-Level Petri nets.
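To make the general idea concrete, the sketch below shows a Lamport-style hash-based one-time signature together with a Merkle "key compression" tree, in the spirit of XMSS-like HBS schemes. It is only an illustration of the family of constructions the abstract refers to, not the actual SDS-OTS or SDS key compression tree; the parameters and helper names are assumptions.

```python
# Illustrative hash-based OTS + Merkle key compression (NOT the SDS construction).
import hashlib, os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def ots_keygen(msg_bits=256):
    """One-time key pair: a pair of secrets per message bit, hashed to get the public key."""
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(msg_bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def ots_sign(message: bytes, sk):
    digest = H(message)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(len(sk))]
    return [sk[i][b] for i, b in enumerate(bits)]

def ots_verify(message: bytes, sig, pk):
    digest = H(message)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(len(pk))]
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits))

def merkle_root(ots_public_keys):
    """Compress many OTS public keys into a single root (the long-term public key)."""
    level = [H(b"".join(a + b for a, b in pk)) for pk in ots_public_keys]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Usage: four one-time key pairs compressed into one public root.
keys = [ots_keygen() for _ in range(4)]
root = merkle_root([pk for _, pk in keys])
sig = ots_sign(b"block payload", keys[0][0])
assert ots_verify(b"block payload", sig, keys[0][1])
```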
Processing large amounts of data in real time to identify security issues poses several performance challenges, especially when hardware infrastructure is limited. Managed Security Service Providers (MSSPs), mostly hosting their applications on the Cloud, receive events at a very high rate, varying from a few hundred to a couple of thousand events per second (EPS). It is critical to process this data efficiently so that attacks can be identified quickly and the necessary response initiated. This paper evaluates the performance of OSTROM, a security framework built on the Esper complex event processing (CEP) engine, under parallel and non-parallel computational configurations. We explain three architectures under which Esper can be used to process events and investigate the effect of each configuration on throughput, memory, and CPU usage. The results indicate that the performance of the engine is limited by the rate of incoming events rather than by the queries being processed. The architecture in which one quarter of the total events is submitted to each instance and all queries are processed by every unit shows the best results in terms of throughput, memory, and CPU usage.
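A minimal sketch of the best-performing configuration described above is given below: the incoming event stream is split evenly across four workers and every worker evaluates the full query set. The real framework uses the Esper CEP engine in Java; the Python code and the simple predicate "queries" here are hypothetical stand-ins intended only to show the partitioning pattern.

```python
# Sketch: 1/4 of events per instance, all queries evaluated by every instance.
from concurrent.futures import ThreadPoolExecutor

QUERIES = {
    "failed_logins": lambda e: e["type"] == "auth" and e["status"] == "fail",
    "large_transfer": lambda e: e["type"] == "netflow" and e["bytes"] > 10_000_000,
}

def worker(events):
    """Run every registered query against this worker's share of the stream."""
    hits = {name: 0 for name in QUERIES}
    for event in events:
        for name, predicate in QUERIES.items():
            if predicate(event):
                hits[name] += 1
    return hits

def process_stream(events, workers=4):
    shards = [events[i::workers] for i in range(workers)]   # even split of the stream
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(worker, shards))
    return {name: sum(r[name] for r in results) for name in QUERIES}

sample = [{"type": "auth", "status": "fail", "bytes": 0}] * 8
print(process_stream(sample))   # {'failed_logins': 8, 'large_transfer': 0}
```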
Despite the surge in vehicular ad-hoc network (VANET) and volunteer computing research, future high-end vehicles are expected to under-utilize their onboard computation, storage, and communication resources. Therefore, this research envisions the next paradigm shift by merging VANET and volunteer computing, which we call VANET-based volunteer computing (VBVC). To date, the potential design space for VBVC has not been characterized. To fill this gap, we first set forth a taxonomy of VBVC, which uses vehicles alongside roadside units (RSUs) to provide computational services to other vehicles on the road. We propose a potential framework for different VBVC scenarios. Moreover, we provide an experimental evaluation of VBVC by comparing it with the traditional model in terms of job completion, latency, and throughput. The proposed VBVC performs better than traditional approaches.
In data publishing, privacy and utility are essential for data owners and data users respectively, yet the two cannot easily coexist. This incompatibility obliges data-privacy researchers to find new and reliable privacy-preserving trade-off techniques. Data providers such as public and private organizations (e.g., hospitals and banks) publish microdata about individuals for various research purposes. Publishing microdata may compromise the privacy of individuals. To protect individual privacy, data must be published after removing personal identifiers such as names and social security numbers. However, removing personal identifiers alone is not enough to protect privacy. The k-anonymity model is used to publish microdata while preserving individual privacy through generalization. Many state-of-the-art generalization-based techniques exist that deal with predefined attacks such as the background knowledge attack, similarity attack, and probability attack. However, existing generalization-based techniques compromise data utility while ensuring privacy. Finding an efficient technique that strikes a trade-off between privacy and utility remains an open question. In this paper, we discuss existing generalization hierarchies and their limitations in detail. We also propose three new generalization techniques: conventional generalization hierarchies, divisors-based generalization hierarchies, and cardinality-based generalization hierarchies. Extensive experiments on a real-world dataset confirm that our technique outperforms existing techniques in terms of utility.
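For readers unfamiliar with generalization-based anonymization, the sketch below shows a conventional generalization hierarchy on a numeric quasi-identifier (age) and the basic k-anonymity check. The divisors-based and cardinality-based hierarchies proposed in the paper are not reproduced; the attribute, band widths, and levels are illustrative assumptions.

```python
# Hedged sketch: conventional generalization hierarchy + k-anonymity check.
from collections import Counter

def generalize_age(age, level):
    """Level 0: exact age, 1: 5-year band, 2: 10-year band, 3: suppressed."""
    if level == 0:
        return str(age)
    if level == 3:
        return "*"
    width = 5 if level == 1 else 10
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

def is_k_anonymous(records, level, k):
    """Every generalized quasi-identifier value must occur at least k times."""
    classes = Counter(generalize_age(r["age"], level) for r in records)
    return all(count >= k for count in classes.values())

records = [{"age": a} for a in (23, 24, 27, 31, 33, 36, 38, 39)]
for level in range(4):                  # pick the lowest level that satisfies k
    if is_k_anonymous(records, level, k=3):
        print("minimum generalization level:", level)   # prints 2
        break
```

The trade-off discussed in the abstract appears directly here: each extra generalization level satisfies k more easily but erases detail, which is exactly the utility loss the proposed hierarchies aim to reduce.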
Malformation is one of the most destructive mango diseases. Although trees are not killed, the vegetative form of the disease impedes canopy development during the vegetative phase of the host plant, while the floral form dramatically reduces fruit yield, with the inoculum overwintering during the dormant phase of the host. Environmental conditions and the trend of spore liberation of the pathogenic fungus Fusarium mangiferae were recorded during the flowering phase (Feb–April 2014), fruit development phase (May–July 2014), vegetative phase (Aug–Oct 2014), and dormant phase (Nov 2014–Jan 2015) of the mango plants, through the installation of spore traps at various distances containing Nash–Snyder medium in Petri plates. During these phases, different environmental variables, including temperature (T), relative humidity (RH), and wind speed (WS), were observed. The maximum number of colonies was observed from spores trapped at the centre of the experimental block (0 m), while minimum numbers o...
Computation offloading is a process that provides computing services to vehicles with computation-sensitive jobs. Volunteer Computing-Based Vehicular Ad-hoc Networking (VCBV) is envisioned as a promising solution for performing task execution in vehicular networks using an emerging concept known as vehicle-as-a-resource (VaaR). In VCBV systems, offloading is the primary technique used for the execution of delay-sensitive applications that rely on surplus resource utilization. To leverage the surplus resources arising in periods of traffic congestion, we propose a hybrid VCBV task coordination model that performs resource utilization for task execution in a multi-hop fashion. We propose an algorithm for determining boundary relay vehicles, minimizing the number of road-side units (RSUs) that must be placed. We also propose algorithms for primary and secondary task coordination using hybrid VCBV. Extensive simulations show that the hybrid technique for task coordination...
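Purely as an illustration of the boundary-relay idea mentioned above (and not the paper's actual algorithm), the sketch below selects, from the volunteers inside an RSU's coverage, the vehicle closest to the coverage edge so that tasks can be forwarded multi-hop to vehicles beyond direct RSU range. Coverage radius, positions, and the selection rule are all assumptions made for this example.

```python
# Hypothetical boundary-relay selection sketch for multi-hop task forwarding.
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: str
    position: float       # 1-D position along the road (metres)
    free_cpu: float       # surplus compute available for volunteering

RSU_POSITION, RSU_RANGE = 0.0, 300.0   # assumed single RSU and coverage radius

def boundary_relay(vehicles):
    """Among vehicles inside RSU range, choose the one nearest the coverage edge."""
    in_range = [v for v in vehicles if abs(v.position - RSU_POSITION) <= RSU_RANGE]
    return max(in_range, key=lambda v: abs(v.position - RSU_POSITION), default=None)

fleet = [Vehicle("v1", 120, 2.0), Vehicle("v2", 280, 1.5), Vehicle("v3", 450, 3.0)]
relay = boundary_relay(fleet)
print(relay.vid if relay else None)   # 'v2' forwards tasks toward v3, beyond RSU range
```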
Trustworthy data" is the fuel for ensuring transparent traceability, precise decision-making, and... more Trustworthy data" is the fuel for ensuring transparent traceability, precise decision-making, and cogent coordination in the Supply Chain (SC) space. However, the disparate data silos act as a trade barrier in orchestrating the provenance of the product lifecycle; starting from the raw materials to end products available for customers. Besides product traceability, the legacy SCs face several other problems including, data validation, data accessibility, security, and privacy issues. Blockchain as one of the advanced Distributed Ledger Technology (DLT) addresses these challenges by linking fragmented and siloed SC events in an immutable audit trail. Nevertheless, the challenging constraints of blockchain, including, but not limited to, scalability, inability to access off-line data, vulnerability to quantum attacks, and high transaction fees necessitate a new mechanism to overcome the inefficiencies of the current blockchain design. In this regard, IOTA (the third generation of DLT) leverages a Directed Acyclic Graph (DAG)-based data structure in contrast to linear data structure of blockchain to bridge such gaps and facilitate a scalable, quantum-resistant, and miner-free solution for the Internet of Things (IoT) in addition to the significant features provided by the blockchain. Realizing the crucial requirement of traceability and considering the limitations of blockchain in SC, we propose a provenance-enabled framework for the Electronics Supply Chain (ESC) through a permissioned IOTA ledger. To identify operational disruptions or counterfeiting issues, we devise a transparent product ledger constructed based on trade event details along with timestamped sensory data to track and trace the complete product journey at each intermediary step throughout the SC processes. We further exploit the Masked Authenticated Messaging (MAM) protocol provided by IOTA that allows the SC players to procure distributed information while keeping confidential trade flows, ensuring restrictions on data retrieval, and facilitating the integration of fine-grained or coarse-grained data accessibility. Our experimental results show that the time required to construct secure provenance data aggregated from multiple SC entities takes on average
With advances in Fog and edge computing, various problems such as data processing for large Internet of Things (IoT) systems can be solved in an efficient manner. One such problem for the next-generation smart grid IoT system, comprising millions of smart devices, is data aggregation. Traditional data aggregation schemes for smart grids incur high computation and communication costs, and in recent years there have been efforts to leverage fog computing with smart grids to overcome these limitations. In this paper, a new fog-enabled privacy-preserving data aggregation scheme (FESDA) is proposed. Unlike existing schemes, the proposed scheme is resilient to false data injection attacks, filtering out the values inserted by external attackers. To achieve privacy, a modified version of the Paillier cryptosystem is used to encrypt the consumption data of smart meter users. In addition, FESDA is fault-tolerant, meaning that the collection of data from other devices is not affected even if some of the smart meters malfunction. We evaluate its performance along with three other competing schemes in terms of aggregation, decryption and communication costs. The findings demonstrate that FESDA reduces the communication cost by 50% when compared with the PPFA aggregation scheme.
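The privacy property relied upon here is the additive homomorphism of the Paillier cryptosystem: multiplying ciphertexts yields an encryption of the sum, so a fog node can aggregate meter readings without decrypting any of them. The sketch below uses textbook Paillier (not the modified variant in the paper) with toy hard-coded primes, purely for illustration.

```python
# Hedged sketch: additively homomorphic aggregation with textbook Paillier.
# Toy primes only; real deployments use ~1024-bit primes. Requires Python 3.8+.
import random
from math import gcd

p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1)
g = n + 1                                        # standard simplification

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)              # modular inverse of L(g^lam mod n^2)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# Fog node aggregates encrypted meter readings without seeing any plaintext.
readings = [12, 7, 30, 5]                        # kWh from four smart meters
aggregate = 1
for c in (encrypt(m) for m in readings):
    aggregate = (aggregate * c) % n2             # ciphertext product = plaintext sum
assert decrypt(aggregate) == sum(readings)       # only the control centre decrypts the total
```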
The Internet of Things (IoT) is an exponentially growing emerging technology, which is applied in the digitization of Electronic Health Records (EHR). IoT applications are used to collect patients' data, which data holders then publish. However, the data collected through IoT-based devices are vulnerable to information leakage and pose a potential privacy threat. Therefore, there is a need to implement privacy protection methods to prevent the identification of individual records in EHR. Significant research contributions exist, e.g., p+-sensitive k-anonymity and balanced p+-sensitive k-anonymity, for implementing privacy protection in EHR. However, these models have certain privacy vulnerabilities, which are identified in this paper through two new types of attack: the sensitive variance attack and the categorical similarity attack. A mitigation solution, the θ-sensitive k-anonymity privacy model, is proposed to prevent these attacks. The proposed model w...
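As a rough illustration of the weakness behind the sensitive variance attack named above, the sketch below flags equivalence classes whose sensitive values have low variance, since near-identical sensitive values within a class leak information even when the class is k-anonymous. The exact θ-sensitive k-anonymity condition from the paper is not reproduced; the threshold, attributes, and scoring are assumptions.

```python
# Hedged sketch: flag low-variance equivalence classes (sensitive variance weakness).
from statistics import pvariance

def vulnerable_classes(table, qi_key, sensitive_key, theta):
    """Group records by quasi-identifier class and flag classes with variance < theta."""
    classes = {}
    for row in table:
        classes.setdefault(row[qi_key], []).append(row[sensitive_key])
    return [qi for qi, values in classes.items() if pvariance(values) < theta]

ehr = [
    {"zip_band": "543**", "severity": 7}, {"zip_band": "543**", "severity": 7},
    {"zip_band": "543**", "severity": 8},     # near-identical severities -> leaks
    {"zip_band": "544**", "severity": 1}, {"zip_band": "544**", "severity": 9},
    {"zip_band": "544**", "severity": 5},     # diverse severities -> safer
]
print(vulnerable_classes(ehr, "zip_band", "severity", theta=2.0))   # ['543**']
```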
In the modern world of digitalization, data growth, aggregation and sharing have escalated drastically. Users share huge amounts of data due to the widespread adoption of Internet-of-Things (IoT) and cloud-based smart devices. Such data may contain confidential attributes about various individuals. Therefore, privacy preservation has become an important concern. Many privacy-preserving data publication models have been proposed to ensure data sharing without privacy disclosure. However, publishing high-dimensional data with sufficient privacy remains a challenging task, and very little attention has been given to proposing optimal privacy solutions for high-dimensional data. In this paper, we propose a novel privacy-preserving model to anonymize high-dimensional data (prone to various privacy attacks, including probabilistic, skewness, and gender-specific attacks). Our proposed model combines l-diversity with constrained slicing and vertical division. The proposed model can prote...
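To ground the l-diversity component of the model, the sketch below shows the basic distinct l-diversity check over buckets of a vertically divided table: every published bucket must contain at least l distinct sensitive values. The constrained-slicing and vertical-division steps from the paper are not shown; the buckets and attribute names are illustrative assumptions.

```python
# Hedged sketch: distinct l-diversity check over published buckets.
def satisfies_l_diversity(buckets, sensitive_key, l):
    """Every bucket must contain at least l distinct sensitive values."""
    return all(len({row[sensitive_key] for row in bucket}) >= l for bucket in buckets)

# Vertically divided view: quasi-identifiers kept apart from the sensitive column,
# rows grouped into buckets before publication.
buckets = [
    [{"age": 34, "disease": "flu"}, {"age": 36, "disease": "asthma"},
     {"age": 31, "disease": "diabetes"}],
    [{"age": 52, "disease": "cancer"}, {"age": 57, "disease": "flu"},
     {"age": 55, "disease": "hepatitis"}],
]
print(satisfies_l_diversity(buckets, "disease", l=3))   # True
```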
In this paper, a GP-based intelligent scheme is used to develop an Optimal Composite Classifier (OCC) from individual nearest-neighbor (NN) classifiers. In the combining scheme, the predicted information is first extracted from the component classifiers. Then, GP is used to evolve an OCC with better performance than the individual NN classifiers. The experimental results demonstrate that the combined decision space of the OCC is more effective. Further, we observed that a heterogeneous combination of classifiers yields more promising results than a homogeneous one. A further advantage of our GP-based intelligent combination scheme is that it automatically handles the issue of optimal model selection among NN classifiers to achieve a higher-performance prediction model.
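A minimal sketch of the first stage of such a combining scheme is given below: several heterogeneous nearest-neighbour classifiers are trained and their predicted class probabilities are extracted as inputs for the combiner. The GP step that evolves the optimal combining function is not shown; simple averaging stands in for it here, and the dataset, component configurations, and split are assumptions.

```python
# Hedged sketch: extracting component kNN predictions for a composite classifier.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Heterogeneous component classifiers: different k values and distance metrics.
components = [
    KNeighborsClassifier(n_neighbors=1),
    KNeighborsClassifier(n_neighbors=5, weights="distance"),
    KNeighborsClassifier(n_neighbors=9, metric="manhattan"),
]
for clf in components:
    clf.fit(X_tr, y_tr)

# "Predicted information" extracted from each component (stacked probabilities).
stacked = np.stack([clf.predict_proba(X_te) for clf in components])

# Placeholder combiner; GP would instead evolve this mapping over the stacked outputs.
combined_pred = stacked.mean(axis=0).argmax(axis=1)
print("composite accuracy:", (combined_pred == y_te).mean())
```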