2017, NoSQL: Database for Storage and Retrieval of Data in Cloud
https://doi.org/10.1201/9781315155579-15
21 pages
AI-generated Abstract
This paper examines the security issues and privacy challenges encountered by NoSQL databases. It highlights key security functions, including vulnerability assessment, auditing, and the need for encryption measures. A comparative analysis of NoSQL databases like MongoDB, Cassandra, and Neo4j reveals that many lack adequate security features, emphasizing the importance of implementing robust security strategies to protect sensitive data.
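Since the abstract notes that several of the surveyed stores lack at-rest protection, one common mitigation is to protect sensitive fields on the application side before they ever reach the database. The sketch below is illustrative only (the `protect_record` helper and field names are not from the paper); it salts and hashes designated fields with PBKDF2 so the plaintext is never stored:

```python
import hashlib
import secrets

def protect_record(record: dict, sensitive_keys: set) -> dict:
    """Return a copy of `record` with sensitive fields replaced by
    salted PBKDF2 hashes, so plaintext never reaches the database."""
    out = {}
    for key, value in record.items():
        if key in sensitive_keys:
            salt = secrets.token_bytes(16)
            digest = hashlib.pbkdf2_hmac(
                "sha256", str(value).encode(), salt, 100_000)
            out[key] = {"salt": salt.hex(), "hash": digest.hex()}
        else:
            out[key] = value
    return out

doc = protect_record({"user": "alice", "ssn": "123-45-6789"},
                     sensitive_keys={"ssn"})
assert doc["user"] == "alice"          # non-sensitive field unchanged
assert "123-45-6789" not in str(doc)   # plaintext never stored
```

Hashing suits fields that only need verification (credentials, identifiers); fields that must be read back would instead need reversible encryption with managed keys.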
The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.
As sensor network deployments grow and mature, a common set of operations and transformations emerges. These can be grouped into a conceptual framework called Sensor Web. Sensor Web combines cyberinfrastructure with a Service Oriented Architecture (SOA) and sensor networks to provide access to heterogeneous sensor resources in a deployment-independent manner. In this chapter we present the Open Sensor Web Architecture (OSWA), a platform-independent middleware for developing sensor applications. OSWA is built upon a uniform set of operations and standard data representations as defined in the Sensor Web Enablement (SWE) standard by the Open Geospatial Consortium (OGC). OSWA uses open source and grid technologies to meet the challenging needs of collecting and analyzing observational data and making it accessible for aggregation, archiving and decision making.
2005
The Databases and Distributed Systems Group at Technische Universität Darmstadt is devoted to research in the areas of data management middleware and reactive, event-based systems. Special emphasis is placed on handling the flow of data and events in a variety of environments: publish/subscribe mechanisms, information dissemination and integration, ubiquitous computing, peer-to-peer infrastructures, and a variety of sensor-based systems ranging from passive RFID infrastructures to active wireless sensor networks. Particular attention is paid to non-functional aspects of the middleware, such as performance, scalability and security, where members of our group are involved in the definition of the SPEC family of benchmarks for J2EE (SPECjAppServer200x) and JMS.
Journal of Software: Evolution and Process, 2019
Cyber physical system (CPS) applications are widely used to control critical infrastructure in various application domains, e.g., medical healthcare, energy, and power, to name a few. Such applications usually take input data from sensors, estimate the current state of the system, and then, based on that estimate, make critical decisions to control the underlying infrastructure automatically. Therefore, the security and integrity of the (system state) data are critically important to ensure safe operation of CPS. In this paper, we present a review of the security of various data management systems used in CPS. Because CPS are composed of systems of (sub)systems that generate huge amounts of data (i.e., periodic sensor input data), NoSQL and NewSQL data management systems have recently emerged as popular choices for efficient and scalable analysis of unstructured data. Unfortunately, these systems were not initially built with data security in mind and are thus vulnerable to numerous security attacks. Considering the flexible data models and efficient access methods of NoSQL and NewSQL, we discuss the security attacks on such data management systems and the corresponding solutions that mitigate them. In particular, we analyze the system and data security of popular NoSQL and NewSQL systems. To do so, we define feature vectors for system and data security and compare the data systems against them. Finally, we propose security solutions for data management systems by identifying various security vulnerabilities in the internal security algorithms of such systems.
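One of the best-known attacks on document stores of this kind is operator injection, where an attacker submits a query operator object (e.g. `{"$gt": ""}` in MongoDB syntax) in place of a scalar value so that a login filter matches every record. A minimal server-free sketch of the mitigation (the `sanitize` and `build_login_filter` helpers are hypothetical, not from the paper) is to reject non-scalar user input before it enters the query filter:

```python
def sanitize(value):
    """Reject operator objects (e.g. {"$gt": ""}) that would turn a
    user-supplied value into a query operator -- a classic injection
    vector in document stores such as MongoDB."""
    if isinstance(value, (dict, list)):
        raise ValueError("scalar expected; operator objects rejected")
    return value

def build_login_filter(username, password):
    # Only scalars reach the filter, so {"$gt": ""} cannot match all users.
    return {"user": sanitize(username), "pw": sanitize(password)}

assert build_login_filter("alice", "s3cret") == {"user": "alice", "pw": "s3cret"}
try:
    build_login_filter("alice", {"$gt": ""})   # injection attempt
    raise AssertionError("injection should have been rejected")
except ValueError:
    pass  # rejected as intended
```

The same principle (strict type validation at the query boundary) applies regardless of the driver or store in use.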
2011
Remote Instrumentation Services go far beyond offering networked access to remote instrument resources. They are becoming established as a way of fully integrating instruments (including laboratory equipment, large-scale experimental facilities, and sensor networks) into a Service Oriented Architecture, where users can view and operate them in the same fashion as computing and storage resources. The deployment of test beds for a large base of scientific instrumentation and e-Science applications is mandatory to develop new functionalities to be embedded in the existing middleware to enable such integration, to test them in the field, and to promote their usage in scientific communities. The DORII (Deployment of Remote Instrumentation Infrastructure) project is a major effort in this direction. The paper presents the performance monitoring infrastructure that has been built in DORII and the results concerning a selected application in seismic engineering.
IEEE Pervasive Computing, 2007
Fusion Engineering and Design, 2014
We present a complex data handling system for the COMPASS tokamak, operated by IPP ASCR Prague, Czech Republic [1]. The system, called CDB (COMPASS DataBase), integrates different data sources, as an assortment of data acquisition hardware and software from different vendors is used. Based on widely available open source technologies wherever possible, CDB is vendor- and platform-independent and can be easily scaled and distributed. The data is directly stored and retrieved using a standard NAS (Network Attached Storage), hence independent of the particular technology; the description of the data (the metadata) is recorded in a relational database. The database structure is general and enables the inclusion of multi-dimensional data signals in multiple revisions (no data is overwritten). This design is inherently distributed, as the work is off-loaded to the clients. Both the NAS and the database can be implemented and optimized for fast local access as well as secure remote access. CDB is implemented in the Python language; bindings for Java, C/C++, IDL and Matlab are provided. Independent data acquisition systems as well as nodes managed by FireSignal [2] are all integrated using CDB. An automated data post-processing server is part of CDB. Based on dependency rules, the server executes, in parallel if possible, prescribed post-processing tasks.
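The split the abstract describes — raw signal data on a file share, metadata (including every revision) in a relational database — can be sketched in a few lines. This miniature is hypothetical and not CDB's actual schema or API; it uses SQLite and a temporary directory in place of the real database and NAS, and shows how appending a new revision row leaves earlier data untouched:

```python
import sqlite3
import struct
import tempfile
from pathlib import Path

nas = Path(tempfile.mkdtemp())        # stand-in for the NAS share
db = sqlite3.connect(":memory:")      # stand-in for the metadata database
db.execute("""CREATE TABLE signal (
    name TEXT, revision INTEGER, path TEXT,
    PRIMARY KEY (name, revision))""")

def store(name: str, samples: list) -> int:
    """Write samples to the share and record a new revision row;
    earlier revisions are never overwritten."""
    rev = db.execute("SELECT COALESCE(MAX(revision), 0) + 1 FROM signal"
                     " WHERE name = ?", (name,)).fetchone()[0]
    path = nas / f"{name}.r{rev}.bin"
    path.write_bytes(struct.pack(f"{len(samples)}d", *samples))
    db.execute("INSERT INTO signal VALUES (?, ?, ?)", (name, rev, str(path)))
    return rev

def load(name: str, rev: int) -> list:
    (path,) = db.execute("SELECT path FROM signal WHERE name=? AND revision=?",
                         (name, rev)).fetchone()
    raw = Path(path).read_bytes()
    return list(struct.unpack(f"{len(raw) // 8}d", raw))

store("Ip", [0.0, 1.5])
store("Ip", [0.0, 1.6])              # second revision; the first is kept
assert load("Ip", 1) == [0.0, 1.5]
assert load("Ip", 2) == [0.0, 1.6]
```

Keeping bulk data out of the database is what makes the design scale: the metadata tier stays small and queryable while the file tier can be any storage technology.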
Software and Cyberinfrastructure for Astronomy III, 2014
The Atacama Large Millimeter/submillimeter Array (ALMA) will be a unique research instrument composed of at least 66 reconfigurable high-precision antennas, located on the Chajnantor plain in the Chilean Andes at an elevation of 5000 m. This paper describes the experience gained after several years working with the monitoring system, which has a strong requirement of collecting and storing up to 150K variables with a highest sampling rate of 20.8 kHz. The original design was built on top of a cluster of relational database servers and network-attached storage with a Fibre Channel interface. As the number of monitoring points increased with the number of antennas included in the array, the current monitoring system proved able to handle the increased data rate in the collection and storage area (only one month of data), but the data query interface showed serious performance degradation. A solution based on a NoSQL platform was explored as an alternative to the current long-term storage system. Among several alternatives, MongoDB was selected. In the data flow, intermediate cache servers based on Redis were introduced to allow faster streaming of the most recently acquired data to web-based charts and applications for online data analysis.
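The caching tier the abstract mentions — recent samples served from fast memory while the bulk history lives in the long-term store — can be illustrated with a bounded per-point buffer. The sketch below is a plain in-memory stand-in, not the paper's actual Redis deployment; the `RecentCache` class and point names are invented for illustration:

```python
from collections import deque

class RecentCache:
    """In-memory stand-in for a Redis-style cache tier: keep only the
    newest N samples per monitoring point so web charts can stream
    fresh data without querying long-term storage."""
    def __init__(self, capacity: int = 1000):
        self.capacity = capacity
        self.points = {}

    def push(self, point: str, timestamp: float, value: float):
        # deque(maxlen=...) silently evicts the oldest sample when full
        buf = self.points.setdefault(point, deque(maxlen=self.capacity))
        buf.append((timestamp, value))

    def latest(self, point: str, n: int):
        buf = self.points.get(point, deque())
        return list(buf)[-n:]

cache = RecentCache(capacity=3)
for t in range(5):
    cache.push("antenna01/temp", float(t), 20.0 + t)
# Only the newest 3 samples survive; older ones belong to long-term storage.
assert cache.latest("antenna01/temp", 10) == [(2.0, 22.0), (3.0, 23.0), (4.0, 24.0)]
```

A real deployment would use a Redis list or stream with a trim policy per key, but the eviction-on-write idea is the same.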
2012
This paper examines the importance of secure structures in the process of analyzing and distributing information with the aid of Grid-based technologies. The advent of distributed networks has provided many practical opportunities for detecting and recording the time of events, and has driven efforts to identify events and to solve problems of storing information, such as keeping it up to date and documented. In this regard, data distribution systems in a network environment should be accurate; as a consequence, a series of continuous and updated data must be at hand. In this case, the Grid is the best answer for using the data and resources of organizations through shared processing.
Journal of Physics: Conference Series, 2012
Proceedings of the 2011 TeraGrid Conference on Extreme Digital Discovery - TG '11, 2011
Future Generation Computer Systems, 2013
Journal of Physics: Conference Series, 2017
Proceedings of the 5th ECCOMAS Thematic Conference on Computational Methods in Structural Dynamics and Earthquake Engineering, COMPDYN 2015, 2015