2009
The Pierre Auger Observatory measures cosmic rays at unparalleled energies, utilizing a sophisticated combination of ground-based water-Cherenkov detectors and fluorescence telescopes that observe the light emitted by extensive air showers. This paper reviews the operational monitoring systems in place to ensure optimal function of these detectors, along with systematic aerosol and cloud measurements collected over several years, supporting the accuracy necessary for high-energy cosmic-ray detection. Key findings illustrate the relationship between atmospheric conditions and their effects on data collection, leading to improved methodologies in cosmic-ray research.
Proceedings of 37th International Cosmic Ray Conference — PoS(ICRC2021), 2021
We present the current development of the Monitoring, Logging and Alarm subsystems in the framework of the Array Control and Data Acquisition System (ACADA) for the Cherenkov Telescope Array (CTA). The Monitoring System (MON) is the subsystem responsible for monitoring and logging the overall array (at each of the CTA sites) through the acquisition of monitoring and logging information from the array elements. The MON enables a systematic approach to fault detection and diagnosis, supporting corrective and predictive maintenance to minimize system downtime. We present a unified tool for monitoring data items from the telescopes and other devices deployed at the CTA array sites. Data are immediately available for the operator interface and quick-look quality checks, and are stored for later detailed inspection. The Array Alarm System (AAS) is the subsystem that gathers, filters, exposes, and persists alarms raised by both the ACADA processes and the array elements supervised by the ACADA system. It collects alarms from the telescopes, the array calibration, the environmental monitoring instruments, and the ACADA systems. The AAS also creates new alarms based on the analysis and correlation of the system software logs and the status of the system hardware, providing the filter mechanisms for all the alarms. Data from the alarm system are then sent to the operator via the human-machine interface.
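As a rough illustration of the kind of alarm filtering such a subsystem performs, the sketch below suppresses duplicate alarms from the same source raised within a holdoff window. The class and field names are invented for illustration and are not part of the ACADA codebase.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Alarm:
    source: str      # e.g. a telescope id or an ACADA process name
    code: str        # alarm identifier
    severity: int    # higher = more severe


class AlarmFilter:
    """Suppress duplicate alarms raised within a holdoff window."""

    def __init__(self, holdoff_s: float = 60.0):
        self.holdoff_s = holdoff_s
        self._last_seen: dict[tuple[str, str], float] = {}

    def accept(self, alarm: Alarm, now: float) -> bool:
        # An alarm passes the filter only if the same (source, code)
        # pair has not been seen within the last holdoff_s seconds.
        key = (alarm.source, alarm.code)
        last = self._last_seen.get(key)
        self._last_seen[key] = now
        return last is None or now - last >= self.holdoff_s
```

A real alarm service would additionally persist accepted alarms and correlate them across sources; this sketch only shows the deduplication step.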
ESO Astrophysics Symposia European Southern Observatory, 2008
Many detectors, optical CCDs and IR arrays, are currently in operation onboard ESO instruments at the La Silla Paranal Observatory. A unified scheme for optical detector characterization has been in use for several years in La Silla, where the Science Operations team uses it to monitor the 18 CCDs belonging to the eight instruments operated by ESO at the Observatory. This scheme has proven successful in ensuring high-quality detector performance over the years. In Paranal, the science operations team and the QC group in Garching monitor the performance of the detectors using instrument-specific data-reduction pipelines.
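The quantities such characterization schemes typically monitor include the CCD gain and read noise, which can be derived from pairs of bias and flat frames via the standard photon-transfer method. The sketch below uses the generic textbook formula, not ESO's actual pipeline:

```python
from statistics import mean, variance


def ptc_gain_and_noise(bias1, bias2, flat1, flat2):
    """Photon-transfer estimate of CCD gain (e-/ADU) and read noise (e- rms).

    Uses *pairs* of bias and flat frames so that fixed-pattern structure
    cancels in the frame differences. Inputs are flat lists of pixel values.
    """
    diff_flat = [a - b for a, b in zip(flat1, flat2)]
    diff_bias = [a - b for a, b in zip(bias1, bias2)]
    # Mean signal above bias, summed over the two flats
    signal = mean(flat1) + mean(flat2) - mean(bias1) - mean(bias2)
    # Shot-noise variance, with read noise subtracted
    var_term = variance(diff_flat) - variance(diff_bias)
    gain = signal / var_term                              # e-/ADU
    read_noise = gain * (variance(diff_bias) / 2) ** 0.5  # e- rms per frame
    return gain, read_noise
```

On real data this would be computed over large overscan-corrected pixel regions; the lists here stand in for flattened image sections.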
2009
To ensure smooth operation of the Pierre Auger Observatory a monitoring tool has been developed. Data from different sources, e.g. the detector components, are collected and stored in a single database. The shift crew and experts can access these data using a web interface that displays generated graphs and specially developed visualisations. This tool offers an opportunity to monitor the long-term stability of some key quantities and of the data quality. Derived quantities, such as the on-time of the fluorescence telescopes, can be estimated in nearly real time and added to the database for further analysis. In addition to access via the database server, the database content is distributed in packages allowing a wide range of analyses off-site. A new functionality has been implemented to manage maintenance and interventions in the field using the web interface. It covers the full work-flow from an alarm being raised to the issue being resolved.
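A work-flow from alarm to resolution can be modeled as a small state machine. The states and transitions below are hypothetical, chosen only to illustrate the idea of tracking an intervention; they are not the Auger tool's actual states.

```python
from enum import Enum, auto


class IssueState(Enum):
    RAISED = auto()      # alarm fired
    ASSIGNED = auto()    # expert assigned
    IN_FIELD = auto()    # intervention in progress
    RESOLVED = auto()    # issue closed


# Allowed transitions of the (hypothetical) intervention work-flow
TRANSITIONS = {
    IssueState.RAISED: {IssueState.ASSIGNED},
    IssueState.ASSIGNED: {IssueState.IN_FIELD, IssueState.RAISED},
    IssueState.IN_FIELD: {IssueState.RESOLVED, IssueState.ASSIGNED},
    IssueState.RESOLVED: set(),
}


class Issue:
    def __init__(self, description: str):
        self.description = description
        self.state = IssueState.RAISED
        self.history = [IssueState.RAISED]

    def advance(self, new_state: IssueState) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)
```

Keeping the full history per issue is what lets a web interface display the complete life cycle of each intervention.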
Journal of Physics: Conference Series, 2017
Over the past two years, the operations at CNAF, the ICT center of the Italian Institute for Nuclear Physics, have undergone significant changes. The adoption of configuration management tools, such as Puppet, and the constant increase of dynamic and cloud infrastructures have led us to investigate a new monitoring approach. The present work deals with the centralization of the monitoring service at CNAF through a scalable and highly configurable monitoring infrastructure. The selection of tools has been made taking into account the following user requirements: (I) adaptability to dynamic infrastructures, (II) ease of configuration and maintenance together with greater flexibility, (III) compatibility with the existing monitoring system, and (IV) re-usability and ease of access to information and data. The paper describes the CNAF monitoring infrastructure and its components: Sensu as the monitoring router, InfluxDB as the time-series database storing data gathered from sensors, Uchiwa as the monitoring dashboard, and Grafana as the tool to create dashboards and visualize time-series metrics.
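In a stack like this, checks forward metrics to InfluxDB, whose write API accepts points in a plain-text line protocol of the form `measurement,tag_set field_set timestamp`. A minimal serializer sketch follows (floats and strings only; real InfluxDB typing also marks integers with an `i` suffix, and values containing spaces or commas need escaping). The measurement and tag names in the usage example are invented:

```python
def to_line_protocol(measurement: str, tags: dict, fields: dict, ts_ns: int) -> str:
    """Serialize one data point in InfluxDB line protocol:
       <measurement>,<tag_set> <field_set> <timestamp>
    Tags are always strings; string field values are double-quoted.
    """
    tag_set = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_set = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_set} {field_set} {ts_ns}"
```

One such line per metric, POSTed to the database's `/write` endpoint, is all a sensor handler needs to emit.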
presented by Michele Gulmini (Michele.Gulmini@cern.ch)
Advances in Space Research, 2009
Huge magnetic clouds of plasma emitted by the Sun dominate intense geomagnetic storm occurrences, and they are simultaneously correlated with variations in the spectra of particles and nuclei in interplanetary space, ranging from subthermal solar-wind ions up to GeV-energy galactic cosmic rays. For reliable and fast Space Weather forecasting, world-wide networks of particle detectors are operated at different latitudes, longitudes, and altitudes. Based on a new type of hybrid particle detector developed in the context of the International Heliophysical Year (IHY 2007) at the Aragats Space Environmental Center (ASEC), we have started to prepare hardware and software for the first sites of the Space Environmental Viewing and Analysis Network (SEVAN). In this paper the architecture of the newly developed data acquisition system for SEVAN is presented. We plan to run the SEVAN network under a single data acquisition system, enabling fast integration of data for on-line analysis of solar flare events. The Advanced Data Acquisition System (ADAS) is designed as a distributed network of uniform components connected by Web Services. Its main component is the Unified Readout and Control Server (URCS), which controls the underlying electronics by means of detector-specific drivers and performs a preliminary analysis of the on-line data. The lower-level components of the URCS are implemented in C, and a fast binary representation is used for data exchange with the electronics. After preprocessing, however, the data are converted to a self-describing hybrid XML/binary format. To achieve better reliability, all URCS run on embedded computers without disks or fans, avoiding the limited lifetime of moving mechanical parts. Data storage is carried out by high-performance servers working in parallel to provide data security. These servers periodically retrieve the data from all URCS and store it in a MySQL database.
The implementation of the control interface is based on high-level web standards; therefore, all properties of the system can be remotely managed and monitored by operators using web browsers. The advanced data acquisition system at ASEC in Armenia started operation in November 2006. The reliability of the multi-client service has been proven by continuous monitoring of neutral and charged cosmic-ray particles. Seven particle monitors are located at 2000 and 3200 m above sea level, at distances of 40 and 60 km from the main data server.
2003
This paper presents the implementation of a monitoring system for high-power machines distributed over a large area in coal mines. It describes the system architecture and its components: the data-acquisition subsystem, represented by the data-collecting equipment (ECD) mounted on each monitored machine, and the operator subsystem, which retrieves, processes, and monitors data from the ECD units. It also describes the structure of the central monitoring software and the specific programs of the local ECD equipment. Through the communication server, the system collects data from all the equipment in the mine and allows the synoptic scheme of the technological flow to be visualized on any authorized computer connected to the same network as the server. The software also produces reports on machine operation (individually or by technological line), either cumulative or journal-type...
Journal of Physics: Conference Series, 2010
In this paper we give a description of the database services for the control and monitoring of the electromagnetic calorimeter of the CMS experiment at the LHC. After a general description of the software infrastructure, we present the organization of the tables in the database, which has been designed to simplify the development of software interfaces. This is achieved by including in the database a description of each relevant table. We also give estimates of the final size and performance of the system.
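A self-describing layout of this kind can be sketched with a companion table that documents every column, so that generic software interfaces can discover table contents at run time. The schema below is purely illustrative (SQLite, with invented table and column names), not the actual CMS ECAL schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A condition-data table plus a companion description table: the database
# itself documents what each column of each relevant table contains.
cur.execute(
    "CREATE TABLE channel_hv (channel_id INTEGER, voltage REAL, ts TEXT)"
)
cur.execute(
    "CREATE TABLE table_descriptions "
    "(table_name TEXT, column_name TEXT, description TEXT)"
)
cur.executemany(
    "INSERT INTO table_descriptions VALUES (?, ?, ?)",
    [
        ("channel_hv", "channel_id", "crystal channel identifier"),
        ("channel_hv", "voltage", "applied high voltage in volts"),
        ("channel_hv", "ts", "measurement timestamp, ISO 8601"),
    ],
)


def describe(table: str) -> dict:
    """Return {column: description} for a table, read from the database."""
    rows = cur.execute(
        "SELECT column_name, description FROM table_descriptions "
        "WHERE table_name = ?",
        (table,),
    ).fetchall()
    return dict(rows)
```

An interface generator can call `describe()` instead of hard-coding column meanings, which is the simplification the abstract refers to.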
Journal of Physics: Conference Series, 2017
The Compact Muon Solenoid is a large and complex general-purpose experiment at the CERN Large Hadron Collider (LHC), built and maintained by many collaborators from around the world. Efficient operation of the detector requires widespread and timely access to a broad range of monitoring and status information. To that end the Web Based Monitoring (WBM) system was developed to present data to users located anywhere from many underlying heterogeneous sources, from real-time messaging systems to relational databases. This system provides the power to combine and correlate data in both graphical and tabular formats of interest to the experimenters, including data such as beam conditions, luminosity, trigger rates, detector conditions, and many others, allowing for flexibility on the user's side. This paper describes the WBM system architecture and how the system has been used from the beginning of data taking until now (Run 1 and Run 2).
2020
The Cherenkov Telescope Array (CTA) project is the initiative to build the next-generation gamma-ray observatory. With more than 100 telescopes planned to be deployed in two sites, CTA is one of the largest astronomical facilities under construction. The Array Control and Data Acquisition (ACADA) system will be the central element of on-site CTA Observatory operations. The mission of the ACADA system is to manage and optimize the telescope array operations at each of the CTA sites. To that end, ACADA will provide all necessary means for the efficient execution of observations, and for the handling of the several Gb/s generated by each individual CTA telescope. The ACADA system will contain a real-time analysis pipeline, dedicated to the automatic generation of science alert candidates based on the inspection of data being acquired. These science alerts, together with external alerts arriving from other scientific installations, will permit ACADA to modify ongoing observations at sub...
Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 2004
On January 22, 2002, KamLAND started data-taking. The KamLAND detector is a complicated system consisting of liquid scintillator, buffer oil, a spherical balloon, and other components. In order to maintain detector safety, we constructed a monitoring system that collects detector status information such as the balloon weight and the liquid-scintillator oil level. In addition, we constructed a continuous Rn monitoring system for the detection of 7Be solar neutrinos. The KamLAND monitoring system consists of various networks (LON, 1-Wire, and TCP/IP), which are indispensable for continuous experimental data acquisition.
Computer Physics Communications, 2020
The KM3NeT Collaboration runs a multi-site neutrino observatory in the Mediterranean Sea. Water Cherenkov particle detectors, deep in the sea and far off the coasts of France and Italy, are already taking data while incremental construction progresses. Data Acquisition Control software is operating offshore detectors as well as testing and qualification stations for their components. The software, named Control Unit, is highly modular. It can undergo upgrades and reconfiguration with the acquisition running. Interplay with the central database of the Collaboration is obtained in a way that allows for data taking even if Internet links fail. In order to simplify the management of computing resources in the long term, and to cope with possible hardware failures of one or more computers, the KM3NeT Control Unit software features a custom dynamic resource provisioning and failover technology, which is especially important for ensuring continuity in case of rare transient events in multi-messenger astronomy. The software architecture relies on ubiquitous tools and broadly adopted technologies and has been successfully tested on several operating systems.
[Figure: Control Unit remote interface methods, including NotifyTargetChange, GetTarget, Enable, Disable, IsEnabled, NotifyRunsetupChange, NotifyRunnumberChange, CurrentDetector, CurrentRunsetup, CurrentRunNumber, NotifyDetectorChange, NotifyTriDASOverrideChange, NotifyOpticalDataTargetChange, GetAuthenticationManager, and Terminate.]
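The failover idea, reassigning processes away from failed hosts, can be sketched generically as below. The Control Unit's actual provisioning technology is custom, so the function and host names here are purely illustrative:

```python
def reassign(assignments: dict, alive: set) -> dict:
    """Return a new process-to-host map in which every process whose host
    has died is moved to the least-loaded surviving host.
    """
    # Keep processes already running on surviving hosts.
    new = {proc: host for proc, host in assignments.items() if host in alive}
    orphaned = [proc for proc, host in assignments.items() if host not in alive]
    for proc in orphaned:
        # Count current load per surviving host, then pick the least loaded
        # (ties broken by host name for determinism).
        load = {h: 0 for h in alive}
        for h in new.values():
            load[h] += 1
        target = min(alive, key=lambda h: (load[h], h))
        new[proc] = target
    return new
```

A supervisor loop would call such a function on every heartbeat failure, then start the orphaned processes on their new hosts.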
Journal of Physics: Conference Series, 2012
The CMS detector control system (DCS) is responsible for controlling and monitoring the detector status and for the operation of all CMS sub detectors and infrastructure. This is required to ensure safe and efficient data taking so that high quality physics data can be recorded. The current system architecture is composed of more than 100 servers in order to provide the required processing resources. An optimization of the system software and hardware architecture is under development to ensure redundancy of all the controlled subsystems and to reduce any downtime due to hardware or software failures. The new optimized structure is based mainly on powerful and highly reliable blade servers and makes use of a fully redundant approach, guaranteeing high availability and reliability. The analysis of the requirements, the challenges, the improvements and the optimized system architecture as well as its specific hardware and software solutions are presented.
2021
The construction of the first stage of the Pierre Auger Observatory, designed for research on ultra-high-energy cosmic rays, began in 2001 with a prototype system. The Observatory has been collecting data since early 2004 and was completed in 2008. The Observatory is situated at 1400 m above sea level near Malargüe (Mendoza province) in western Argentina, covering a vast plain of 3000 km², known as the Pampa Amarilla. The Observatory is a hybrid detector consisting of 1660 water-Cherenkov stations, forming the Surface Detector (SD), and 27 peripheral atmospheric fluorescence telescopes, comprising the Fluorescence Detector (FD). Over time, the Auger Observatory has been enhanced with various R&D prototypes and is currently undergoing an important upgrade called AugerPrime. In the present contribution, the general operations of the SD and FD are described. In particular, the FD shift procedure (executable locally in Malargüe or remotely by teams in control rooms abroad within the Collaboration) and the new SD shifts (operating since 2019) are explained. Additionally, the SD and FD maintenance campaigns, as well as the data taking and data handling at a basic level, are reported.
Acta Physica Hungarica, 1986
In a nuclear facility a round-the-clock surveillance of the radiation levels in selected areas is necessary and usually required by the national regulatory institutions. In addition, the instrument readings have to be recorded automatically and stored for a longer period, typically 10 years. One problem with such systems is the accumulation of a large amount of radiation data which are very difficult to review and store. Usually only a small fraction of the data (i.e. readings exceeding preset alarm levels) are of interest, while the rest of the data are more or less unnecessary, although their storage is required for continuous documentation. As a compromise between storage of all data and immediate information on important data (i.e. alarms), an area surveillance system has been designed and installed at the TRIGA reactor Vienna using a data logger and a personal computer as a storage facility. In total 16 radiation monitor signals are permanently scanned by a multiplexer for deviation from n...
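The compromise described, storing alarm readings immediately while retaining only sparse documentation samples of normal data, can be sketched as a simple reduction filter. The sampling interval of every 60th reading is an assumption for illustration, not the TRIGA system's actual value:

```python
def reduce_readings(readings, alarm_level):
    """Keep every reading at or above alarm_level, plus periodic
    documentation samples (here: every 60th reading); discard the rest.

    readings: list of (timestamp, value) pairs in acquisition order.
    """
    kept = []
    for i, (ts, value) in enumerate(readings):
        if value >= alarm_level or i % 60 == 0:
            kept.append((ts, value))
    return kept
```

This keeps the archive small while guaranteeing that every alarm and a regular baseline trace survive for the multi-year documentation requirement.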
Many transmitters (pressure, level and flow) are used in a nuclear power plant. It is necessary to calibrate them periodically to ensure that their measurements are accurate. These calibration tasks are time consuming and often contribute to worker radiation exposure. Human errors can also sometimes degrade performance, since the calibration involves intrusive techniques. More importantly, experience has shown that the majority of current calibration efforts are not necessary. These facts motivated the nuclear industry to develop new technologies for identifying drifting instruments. These technologies, known as on-line monitoring (OLM) techniques, are non-intrusive and allow maintenance efforts to be focused on the instruments that really need calibration. Although a few OLM systems have been implemented in some PWR and BWR plants, these technologies are not commonly used and have not been permanently implemented in a CANDU plant. This paper presents the results of a research project that has been performed in a CANDU plant in order to validate the implementation of an OLM system. An application project, based on the ICMP algorithm developed by EPRI, has been carried out in order to evaluate the performance of an OLM system. The results demonstrated that the OLM system was able to detect the drift of an instrument in the majority of the studied cases. A feasibility study has also been completed and has demonstrated that the implementation of an OLM system at a CANDU nuclear power plant could be advantageous under certain conditions.
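The core idea of redundant-channel OLM can be illustrated with a simple consistency check: estimate the process parameter from the other instruments and flag any channel that deviates too far from that estimate. This is a generic sketch, not the EPRI ICMP algorithm itself, and the instrument names and threshold are invented:

```python
from statistics import mean


def drift_flags(samples: dict, max_dev: float) -> set:
    """Flag redundant instruments whose reading deviates from the group
    estimate by more than max_dev (simple consistency check; ICMP uses a
    weighted-average estimate with acceptance bands).
    """
    flagged = set()
    for name, value in samples.items():
        # Estimate the parameter from the *other* instruments, so that a
        # drifting channel does not pull its own reference value.
        others = [v for n, v in samples.items() if n != name]
        if abs(value - mean(others)) > max_dev:
            flagged.add(name)
    return flagged
```

Applied periodically to live plant data, such a check lets maintenance concentrate on the flagged channels instead of recalibrating every transmitter on a fixed schedule.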
Journal of Physics: Conference Series, 2008
For reliable and timely Space Weather forecasting, world-wide networks of particle detectors are located at different latitudes, longitudes and altitudes. To provide better integration of these networks, the data acquisition system faces the challenge of establishing reliable data exchange between multiple network nodes, which are often located in hard-to-reach places and operated by small research groups. In this article we present a data acquisition system for the newly established SEVAN (Space Environmental Viewing and Analysis Network), elaborated on top of free open-source technologies. Our solution is organized as a distributed network of uniform components connected by standard interfaces. The main component is the URCS (Unified Readout and Control Server), which controls the front-end electronics, collects data, and performs preliminary analysis. The URCS operates fully autonomously. Essential characteristics of the software components and electronics are remotely controllable via a dynamic web interface; the data are stored locally for a certain amount of time and distributed on request to other nodes over web services. To simplify data exchange with collaborating groups, we use an extensible XML-based format for data dissemination. The data acquisition system at the Aragats Space Environmental Center in Armenia was started in November 2006. Seven particle monitors are located at 2000 and 3200 meters above sea level, at distances of 40 and 60 km from the data-analysis servers in Yerevan, Armenia. The reliability of the service has been proven by continuous monitoring of the incident cosmic-ray flux.
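A self-describing XML envelope around a binary readout, in the spirit of the hybrid XML/binary dissemination format mentioned above, might look like the sketch below. The element names and encoding are assumptions for illustration, not the actual SEVAN schema:

```python
import base64
import struct
import xml.etree.ElementTree as ET


def wrap_readout(station: str, counts: list[int]) -> str:
    """Wrap a binary count readout in a self-describing XML envelope.

    The payload stays compact (packed little-endian uint32, base64-encoded),
    while the envelope carries enough metadata to decode it.
    """
    payload = struct.pack(f"<{len(counts)}I", *counts)
    root = ET.Element("readout", station=station, channels=str(len(counts)))
    fmt = ET.SubElement(root, "format")
    fmt.text = "uint32-le, base64"
    data = ET.SubElement(root, "data")
    data.text = base64.b64encode(payload).decode("ascii")
    return ET.tostring(root, encoding="unicode")


def unwrap_readout(xml_text: str) -> list[int]:
    """Decode the envelope produced by wrap_readout."""
    root = ET.fromstring(xml_text)
    n = int(root.get("channels"))
    payload = base64.b64decode(root.find("data").text)
    return list(struct.unpack(f"<{n}I", payload))
```

Because the envelope names its own payload format, a collaborating group can decode the data without out-of-band documentation, which is the point of a self-describing format.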
2007
The CMS experiment at the LHC at CERN will start taking data in 2008. To configure, control and monitor the experiment during data-taking the Run Control and Monitoring System (RCMS) was developed. This paper describes the architecture and the technology used to implement the RCMS, as well as the deployment and commissioning strategy of this important component of the online software for the CMS experiment.
2011
The Surface Array Detector of the Pierre Auger Observatory consists of about 1600 water-Cherenkov detectors. The operation of each station is continuously monitored with respect to its individual components, such as batteries and solar panels, aiming at the diagnosis and anticipation of failures. In addition, the evolution with time of the response and of the trigger rate of each station is recorded. The behavior of the earliest deployed stations is used to predict the future performance of the full array.