2012
Ever-growing technology has created the need to store and process excessively large amounts of data in the cloud. The current volume of data is enormous and is expected to grow more than 650-fold by the year 2014, of which 85% would be unstructured. This is known as the ‘Big Data’ problem. Hadoop techniques, an efficient resource-scheduling method, and a probabilistic redundant scheduling scheme are presented so that the system can efficiently organize "free" computer storage resources existing within enterprises to provide low-cost, high-quality storage services. The proposed methods and system provide a valuable reference for the implementation of a cloud storage system. The proposed method includes a Linux-based cloud.
Cornell University - arXiv, 2022
In this paper, a technology for massive data storage and computing named Hadoop is surveyed. A Hadoop cluster consists of heterogeneous computing devices such as regular PCs; it abstracts away the details of parallel processing so that developers can concentrate on their computational problem. A Hadoop cluster is made of two parts: HDFS and MapReduce. The cluster uses HDFS for data management. HDFS provides storage for input and output data in MapReduce jobs and is designed for high fault tolerance, high distribution capacity, and high throughput. It is also suitable for storing terabyte- or petabyte-scale data on a cluster, and it runs on flexible hardware such as commodity devices.
International Journal of Computer Applications, 2015
There is an explosion in the volume of data in the world. The amount of data is increasing by leaps and bounds, with sources including individuals, social media, organizations, and more. The data may be structured, semi-structured, or unstructured. Gaining knowledge from this data and using it for competitive advantage is the primary focus of all organizations. In the last few years, Big Data has found its way into almost every field, from government to private sectors and from industry to academia. The major challenges associated with Big Data are data organization, modeling, analysis, and retrieval. Hadoop is a widely used software framework for the large-scale management and analysis of data. The main components of Hadoop, HDFS and MapReduce, enable the distributed storage and processing of data over a large number of commodity servers. This paper provides an overview of MapReduce and its capabilities and discusses the related issues.
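As a concrete illustration of the MapReduce model this abstract surveys, the following is a minimal word-count sketch using the standard Hadoop MapReduce Java API. Class names and structure follow the common textbook example rather than any particular paper's code.

```java
// Minimal word-count sketch: the map phase emits (word, 1) pairs and the
// reduce phase sums the counts per word. Assumes the standard
// org.apache.hadoop.mapreduce API; class names are illustrative.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountSketch {

  // Map phase: emit (word, 1) for every token in an input line.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the counts emitted for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }
}
```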
2014 IEEE International Advance Computing Conference (IACC), 2014
Hadoop is an open-source cloud computing platform of the Apache Foundation that provides a software programming framework called MapReduce and a distributed file system, HDFS. It is a Linux-based set of tools that uses commodity hardware, which is relatively inexpensive, to handle, analyze, and transform large quantities of data. The Hadoop Distributed File System, HDFS, stores huge data sets reliably and streams them to user applications at high bandwidth, while MapReduce is a framework used for processing massive data sets in a distributed fashion over several machines. This paper gives a brief overview of Big Data, Hadoop MapReduce, and the Hadoop Distributed File System along with its architecture.
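To make the HDFS role described above concrete, here is a small client-side sketch that writes a file into HDFS and streams it back, using the standard org.apache.hadoop.fs API. The NameNode address and file paths are placeholders, not values from the paper.

```java
// Sketch of an HDFS client: write a file, then stream it back.
// The cluster address and paths below are assumed placeholders.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder NameNode address; a real deployment reads this from core-site.xml.
    conf.set("fs.defaultFS", "hdfs://namenode:9000");

    try (FileSystem fs = FileSystem.get(conf)) {
      Path file = new Path("/user/demo/sample.txt");

      // Write a small file; HDFS splits larger files into blocks transparently.
      try (FSDataOutputStream out = fs.create(file, true)) {
        out.write("hello hdfs".getBytes(StandardCharsets.UTF_8));
      }

      // Stream the file contents back to the client.
      try (BufferedReader reader = new BufferedReader(
               new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
        System.out.println(reader.readLine());
      }
    }
  }
}
```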
2016
Cloud computing has introduced a new model for supplying computing infrastructure, and Big Data management has been identified as one of the most important technologies for the coming years. This paper presents a comprehensive survey of different approaches to data management applications using MapReduce. The open-source framework implementing the MapReduce algorithm is Hadoop. We simulate different design examples of MapReduce stored on the cloud. This paper proposes an application of MapReduce that runs on a huge cluster of machines within the Hadoop framework. The proposed implementation methodology is highly scalable and easy to use for non-professional users. The main objective is to improve the performance of the MapReduce data management system on the basis of the Hadoop framework. Simulation results show the effectiveness of the proposed implementation methodology for MapReduce.
This paper deals with parallel distributed systems. Hadoop has become a central platform for storing big data through its Hadoop Distributed File System (HDFS) as well as for running analytics on this stored big data using its MapReduce component. The MapReduce programming model has shown great value in processing huge amounts of data; MapReduce is a common framework for data-intensive distributed computing of batch jobs. The Hadoop Distributed File System (HDFS) is a Java-based file system that provides scalable and reliable data storage designed to span large clusters of commodity servers. In all Hadoop implementations, the default FIFO scheduler is available, where jobs are scheduled in FIFO order, with support for other priority-based schedulers as well. In this paper, we study the Hadoop framework, the HDFS design, and the MapReduce programming model, survey the various schedulers available for Hadoop, and describe the behavior of the current scheduling schemes on a locally deployed cluster.
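To illustrate the scheduling choices mentioned in this abstract, the sketch below shows how a cluster might be switched from the default FIFO scheduler to the Fair Scheduler and how a job can declare a priority hint. The property names are the standard YARN/MapReduce configuration keys; in practice the scheduler class is normally set cluster-wide in yarn-site.xml rather than in application code, so this is only an illustrative sketch.

```java
// Illustrative sketch: select a non-FIFO scheduler and mark a job's priority.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.JobPriority;

public class SchedulerConfigSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Cluster-side setting: swap the default scheduler for the Fair Scheduler.
    conf.set("yarn.resourcemanager.scheduler.class",
        "org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler");

    // Per-job hint honored by priority-aware schedulers.
    Job job = Job.getInstance(conf, "priority-demo");
    job.setPriority(JobPriority.HIGH);
  }
}
```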
The MapReduce model has become an important parallel processing model for large-scale data-intensive applications such as data mining and web indexing. Hadoop, an open-source implementation of MapReduce, is widely applied to support cluster computing jobs requiring low response time. Various issues of Hadoop are discussed here, together with the solutions proposed for them in the papers studied by the author. Hadoop is not an easy environment to manage. The current Hadoop implementation assumes that computing nodes in a cluster are homogeneous in nature, and network delays due to data movement at run time have been ignored in recent Hadoop research. Unfortunately, both the homogeneity and data locality assumptions in Hadoop are optimistic at best and unachievable at worst, which introduces performance problems in virtualized data centers. The single point of failure (SPOF) existing in critical nodes of Hadoop is analyzed, and a metadata-replication-based solution is proposed to enable Hadoop high availability. The goal of handling heterogeneity can be achieved by a data placement scheme that distributes and stores data across multiple heterogeneous nodes based on their computing capacities. Analysts have noted that using the technology to aggregate and store data from multiple sources can create a whole slew of problems related to access control and ownership, and applications analyzing merged data in a Hadoop environment can create new datasets that may also need to be protected.
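The capacity-aware data placement idea mentioned above can be illustrated with a small, hypothetical calculation (this is not the surveyed papers' algorithm): blocks of a file are assigned to heterogeneous nodes in proportion to each node's measured computing capacity, so faster nodes receive more local data.

```java
// Hypothetical sketch of capacity-proportional block assignment.
import java.util.LinkedHashMap;
import java.util.Map;

public class CapacityAwarePlacementSketch {
  // Distribute 'totalBlocks' among nodes in proportion to their relative capacity.
  public static Map<String, Long> assignBlocks(Map<String, Double> nodeCapacity, long totalBlocks) {
    double totalCapacity = 0;
    for (double c : nodeCapacity.values()) totalCapacity += c;

    Map<String, Long> assignment = new LinkedHashMap<>();
    long assigned = 0;
    String lastNode = null;
    for (Map.Entry<String, Double> e : nodeCapacity.entrySet()) {
      long share = (long) Math.floor(totalBlocks * e.getValue() / totalCapacity);
      assignment.put(e.getKey(), share);
      assigned += share;
      lastNode = e.getKey();
    }
    // Hand any rounding remainder to the last node so every block is placed.
    if (lastNode != null) {
      assignment.put(lastNode, assignment.get(lastNode) + (totalBlocks - assigned));
    }
    return assignment;
  }

  public static void main(String[] args) {
    Map<String, Double> capacity = new LinkedHashMap<>();
    capacity.put("fast-node", 4.0);    // relative capacity, e.g. tasks per minute
    capacity.put("medium-node", 2.0);
    capacity.put("slow-node", 1.0);
    // Prints {fast-node=40, medium-node=20, slow-node=10}
    System.out.println(assignBlocks(capacity, 70));
  }
}
```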
Foundation of Computer Applications, 2019
The widespread popularity of cloud computing as a preferred platform for the deployment of web applications has resulted in an enormous number of applications moving to the cloud and in the huge success of cloud service providers. Data center storage management plays a vital role in cloud computing environments. In particular, PC-cluster-based data storage is needed to manage data on low-cost storage servers so that storage space can be reduced. The paper presents MapReduce and Hadoop as Big Data systems that support the processing of large data sets in a cloud computing environment. The system presents an efficient data storage approach that pushes work out to many nodes in a cluster using the Hadoop Distributed File System (HDFS) with a variable chunk size to facilitate massive data processing, and it introduces an implementation enhancement to the MapReduce model with the BW (Burrows-Wheeler) Transform to reduce the amount of data redundancy and improve scalability, so that the system keeps working within the existing physical storage capacity as the number of users and files increases.
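The "variable chunk size" idea above can be sketched with the standard HDFS client API, which lets an application pick the block (chunk) size per file at creation time. The paths, sizes, and replication factor below are illustrative assumptions, not values taken from the paper.

```java
// Sketch: choose the HDFS block size per file via the FileSystem.create overload.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class VariableChunkSizeSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(conf)) {
      long smallBlock = 32L * 1024 * 1024;   // 32 MB chunks for smaller files
      long largeBlock = 256L * 1024 * 1024;  // 256 MB chunks for large sequential files

      writeWithBlockSize(fs, new Path("/data/small/file.txt"), smallBlock);
      writeWithBlockSize(fs, new Path("/data/large/file.txt"), largeBlock);
    }
  }

  private static void writeWithBlockSize(FileSystem fs, Path path, long blockSize) throws Exception {
    int bufferSize = 4096;
    short replication = 3;
    // create(path, overwrite, bufferSize, replication, blockSize) sets the chunk size for this file only.
    try (FSDataOutputStream out = fs.create(path, true, bufferSize, replication, blockSize)) {
      out.writeBytes("example payload");
    }
  }
}
```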
Big data plays a major role in all aspects of business and IT infrastructure. Today many organizations, social media networking sites, e-commerce platforms, educational institutions, satellite communication systems, aircraft, and others generate a huge volume of data on a daily basis. This data comes in structured, semi-structured, and unstructured forms, and this huge volume of data is what is termed big data. Such data must be stored and processed effectively, but traditional distributed systems cannot handle it because of a lack of resources. This is where Hadoop comes into the picture. Hadoop stores and processes huge volumes of data through its strong ecosystem, which contains many modules for processing data, storing data, allocating resources, configuration management, retrieving data, and providing a highly fault-tolerant mechanism. This paper focuses on big data concepts, characteristics, real-time examples of big data, Hadoop modules, and their pros and cons.
Today, we're surrounded by data like oxygen. The exponential growth of data first presented challenges to cutting-edge businesses such as Google, Yahoo, Amazon, Microsoft, Facebook, and Twitter. Data volumes to be processed by cloud applications are growing much faster than computing power, and this growth demands new strategies for processing and analyzing information. Hadoop MapReduce has become a powerful computation model that addresses these problems. Hadoop HDFS has become the most popular of the Big Data tools, as it is open source with flexible scalability and a lower total cost of ownership, and it allows data of any form to be stored without requiring data types or schemas to be defined. Hadoop MapReduce is a programming model and software framework for writing applications that rapidly process vast amounts of data in parallel on large clusters of compute nodes. In this paper I provide an overview of the architecture and components of Hadoop, HCFS (Hadoop Cluster File System), and the MapReduce programming model, along with its various applications and implementations in cloud environments.
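The "writing applications" step referred to above is typically a short driver program that wires a job together. The sketch below reuses the TokenizerMapper and IntSumReducer from the word-count sketch earlier on this page (assumed to be in the same package); input and output paths come from the command line.

```java
// Sketch of a MapReduce driver: configure the job, point it at HDFS paths, and submit it.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriverSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCountDriverSketch.class);

    // Classes from the earlier word-count sketch (illustrative names).
    job.setMapperClass(WordCountSketch.TokenizerMapper.class);
    job.setCombinerClass(WordCountSketch.IntSumReducer.class);
    job.setReducerClass(WordCountSketch.IntSumReducer.class);

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. an HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```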
Semiconductor science and information devices, 2022
Data and the internet are growing rapidly, which causes problems in the management of big data. For these kinds of problems, there are many software frameworks used to increase the performance of distributed systems and to provide large-scale data storage. One of the most beneficial software frameworks used to manage data in distributed systems is Hadoop. This paper introduces the Apache Hadoop architecture, the components of Hadoop, and their significance in managing vast volumes of data in a distributed system. The Hadoop Distributed File System enables the storage of enormous chunks of data over a distributed network, and the Hadoop framework maintains the fsimage and edits files, which support the availability and integrity of data. This paper includes cases of Hadoop implementation, such as weather monitoring and bioinformatics processing.
Big data refers to datasets that exceed the ability of typical tools to capture, manage, and process the data within a tolerable elapsed time. Managing this data is a big issue, and nowadays huge amounts of data are produced within organizations, so the big data concept has come into the picture. There are many techniques used to manage and process big data, and one of these techniques is Hadoop. Hadoop can handle huge amounts of data, is very cost-effective, offers fast processing speed, and can create duplicate copies of data to prevent loss in case of system failure. This paper contains an introduction to big data and Hadoop, the characteristics of big data, the problems associated with big data, the architectures of big data and Hadoop, other components of Hadoop, and the advantages, disadvantages, and applications of Hadoop, followed by the conclusion.
Asian Journal of Research in Computer Science, 2021
In recent years, data and the internet have been growing rapidly, resulting in big data. For these problems, there are many software frameworks used to increase the performance of distributed systems and to provide ample data storage. One of the most beneficial software frameworks used to manage data in distributed systems is Hadoop. This software clusters machines and distributes the work among them. Hadoop consists of two major components: the Hadoop Distributed File System (HDFS) and MapReduce (MR). With Hadoop, we can process and count each word in a large file, distributing the work and obtaining the number of occurrences of each word. HDFS is designed to effectively store colossal data sets and transmit them to high-bandwidth user applications, and the differences between it and other available file systems are significant. HDFS is intended for low-cost hardware and is exceptionally fault-tolerant. Thousands of computers in a vast cluster hold both directly attached storage and user application functions, and by distributing storage and computation across numerous servers, the resource scales with demand while remaining cost-effective at every size. Owing to these characteristics of HDFS, many researchers have worked in this field trying to enhance the performance and efficiency of the file system, making it one of the most active cloud systems. This paper offers a study reviewing the essential investigations in this area as a trend beneficial for researchers wishing to work in this field.
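The fault tolerance described above is largely achieved through block replication, which an application can also control per file. The following is a minimal sketch using the standard FileSystem API; the path and the factor of 3 are illustrative assumptions (3 is the common HDFS default), and the file is assumed to already exist.

```java
// Sketch: adjust and inspect the HDFS replication factor of an existing file.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(conf)) {
      Path file = new Path("/user/demo/important.dat"); // assumed to exist already

      // Ask HDFS to keep three copies of every block of this file.
      fs.setReplication(file, (short) 3);

      FileStatus status = fs.getFileStatus(file);
      System.out.println("Replication factor: " + status.getReplication());
    }
  }
}
```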
Hadoop and MapReduce are among the most efficient tools used to reduce the complexity of maintaining big data sets. MapReduce was introduced by Google, and Hadoop is its open-source counterpart, focused on parallelizing computation over large distributed clusters of commodity machines. Thus, the parallel data processing tool MapReduce has been gaining significant momentum in both academia and industry. The objective of this survey is to study MapReduce together with different algorithms for improving performance on large datasets.
The main aim behind the invention of Hadoop is to process big data very efficiently. Nowadays, the web generates a great deal of information on a daily basis, and it is both necessary and difficult to manage billions of pages of content. This paper describes the evolution of Hadoop, the need for it, and its uses, together with a detailed study of the Hadoop framework and its concepts as open-source software supporting distributed computing. Hadoop includes a Distributed File System (HDFS), which manages distributed data on different nodes, and MapReduce as its programming paradigm.
This paper provides an advanced understanding of the concept of Big Data and its components along with an in-depth analysis of the Hadoop framework, a tool used to manage Big Data. This is achieved by showcasing a high-performance tool for Big Data using the Hadoop framework and then analyzing various tasks on the cluster, i.e., the internal architecture of Hadoop along with resource analysis. The article also presents ideas that are essential for envisioning prospects in data management techniques. The project also deploys the cluster management process over containerized applications using a Docker engine. An in-depth analysis of the HDFS and MapReduce frameworks is followed by a brief comparison between the current deployment of Big Data using distributed computing and a more futuristic version using containerized applications.
Data is getting bigger and bigger in size; this is what is called Big Data. Big Data may be structured, unstructured, or semi-structured. Traditional systems are not well suited to managing this huge amount of data, so it is necessary to use better tools to manage it. Hadoop, described here as a Highly Archived Distributed Object Oriented Programming tool, is an open-source software platform written in Java and used to store and manage large amounts of data. In this paper, the configuration of a Hadoop single-node cluster is explained, the hardware and software requirements are described, some commands for running Hadoop are explained, and a Hadoop MapReduce job is presented.
Hadoop is a rapidly growing ecosystem of components, based on Google's MapReduce algorithm and file system work, for implementing MapReduce algorithms in a scalable fashion, distributed on commodity hardware. Hadoop enables users to store and process large volumes of data and analyze them in ways not previously possible with SQL-based approaches or less scalable solutions. Remarkable improvements in conventional compute and storage resources help make Hadoop clusters feasible for most organizations. This paper begins with a discussion of Big Data's evolution and the future of Big Data based on Gartner's Hype Cycle. We explain how the Hadoop Distributed File System (HDFS) works and describe its architecture with suitable illustrations. Hadoop's MapReduce paradigm for distributing a task across multiple nodes is discussed with sample data sets, as is the working of MapReduce and HDFS when they are put together. Finally, the paper ends with a discussion of sample Big Data Hadoop use cases which show how enterprises can gain a competitive benefit by being early adopters of big data analytics.
There has been rapid progress in the cloud, with a growing number of organizations relying on resources in the cloud; this creates a requirement for securing the data of the various customers who share these centralized resources. Distributed storage services avoid exorbitant expenditure on software, staff, and maintenance, and provide better performance, lower storage cost, and flexibility. Cloud services are delivered over the web, which increases their exposure to security vulnerabilities, and security remains one of the critical weaknesses discouraging many organizations from moving into the distributed computing environment. The proposed work addresses Hadoop storage strategies and the MapReduce approach with synchronization between tasks, along with its advantages and limitations.
2019
Hadoop is a framework for processing data of large volume that cannot be processed by conventional systems. Hadoop has a file management system called the Hadoop Distributed File System (HDFS), with a NameNode and DataNodes, where the data is divided into blocks based on the total size of the dataset. In addition, Hadoop has MapReduce, where the dataset is processed in a map phase and then a reduce phase. Using Hadoop for big data analysis has revealed important information that can be used for analytical purposes and for enabling new products. Big data can be found in many different sources, such as social networks, web server logs, broadcast audio streams, and banking transactions. In this paper, we illustrate the main steps to set up Hadoop and MapReduce. The version illustrated in this work is the latest release, Hadoop 3.1.1, for big data analysis. A simplified pseudocode is provided to show the functionality of the Map class and Reduce class. The developed steps are applied to a given example that could be generalized to bigger data.