2016
Computer simulations have become a very powerful tool for scientific research. Given the vast complexity that comes with many open scientific questions, a purely analytical or experimental approach is often not viable. For example, biological systems (such as the human brain) comprise an extremely complex organization and heterogeneous interactions across different spatial and temporal scales. In order to facilitate research on such problems, the BioDynaMo project (https://biodynamo.web.cern.ch/) aims to provide a general platform for computer simulations in biological research. Since the scientific investigations require extensive computer resources, this platform should be executable on hybrid cloud computing systems, allowing for the efficient use of state-of-the-art computing technology. This paper describes challenges during the early stages of the software development process. In particular, we describe issues regarding the implementation and the highly interdisciplinary as well as international nature of the collaboration. Moreover, we explain the methodologies, the approach, and the lessons learnt by the team during these first stages.
Concepts, Methodologies, Tools, and Applications
This chapter is a review of the literature on the use of cloud-based computer simulations in scientific research. The authors examine the types of cloud-based computer simulations and notable examples, offering suggestions for the architectures, frameworks, and runtime infrastructures that support running simulations in a cloud environment. Cloud computing has become the standard for providing hardware and software infrastructure. Using the possibilities offered by cloud computing platforms, researchers can efficiently apply existing IT resources to solving computationally intensive scientific problems. The authors further emphasize the possibility of using existing, well-known simulation models and tools in the cloud computing environment, which supports the same kinds of simulation experiments as traditional environments. This way, models become accessible to a wider range of researchers, and the analysis of data resulting from simulation experiments is significantly improved.
Briefings in Bioinformatics, 2013. doi:10.1093/bib/bbt040
The stochastic modelling of biological systems, coupled with Monte Carlo simulation of models, is an increasingly popular technique in bioinformatics. The simulation-analysis workflow can prove computationally expensive, reducing the interactivity required for model tuning. In this work, we advocate high-level software design as a vehicle for building efficient and portable parallel simulators for the cloud. In particular, the Calculus of Wrapped Compartments (CWC) simulator for systems biology, which is designed according to the FastFlow pattern-based approach, is presented and discussed. Thanks to the FastFlow framework, the CWC simulator is designed as a high-level workflow that can simulate CWC models, merge simulation results, and statistically analyse them within a single parallel workflow in the cloud. To improve interactivity, successive phases are pipelined so that the workflow begins to output a stream of analysis results immediately after the simulation is started. The performance and effectiveness of the CWC simulator are validated on the Amazon Elastic Compute Cloud.
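The pipelined, streaming design described in this abstract can be sketched in a few lines. The following Python sketch is only an analogue of the pattern (FastFlow itself is a C++ framework, and the model function and statistics here are hypothetical stand-ins): simulation runs feed a queue, and an analysis stage begins emitting running statistics as soon as the first results arrive rather than after all runs complete.

```python
import random
import statistics
import threading
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

N_RUNS = 100
results: Queue = Queue()

def run_one(seed: int) -> float:
    """Hypothetical stand-in for one stochastic (Monte Carlo) model run."""
    rng = random.Random(seed)
    return sum(rng.gauss(0.0, 1.0) for _ in range(1000))

def simulate_stage() -> None:
    """First pipeline stage: run simulations in parallel, streaming results out."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        for value in pool.map(run_one, range(N_RUNS)):
            results.put(value)
    results.put(None)  # sentinel: no more results

def analyse_stage() -> None:
    """Second pipeline stage: emit running statistics as results arrive."""
    samples = []
    while (value := results.get()) is not None:
        samples.append(value)
        if len(samples) % 10 == 0:  # incremental output, long before all runs end
            print(f"n={len(samples)}  mean={statistics.mean(samples):+.2f}")

producer = threading.Thread(target=simulate_stage)
producer.start()
analyse_stage()  # begins printing while simulations are still running
producer.join()
```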
2020
Simulator interoperability and extensibility have become growing requirements in computational biology. To address this, we have developed a federated software architecture. It is federated in that it unites independent, disparate systems under a single cohesive view; it provides interoperability through its capability to communicate, execute programs, or transfer data among different independent applications; and it supports extensibility by enabling simulator expansion or enhancement without the need for major changes to system infrastructure. Historically, simulator interoperability has relied on the development of declarative markup languages such as the neuron modeling language NeuroML, while simulator extension has typically occurred through modification of existing functionality. The software architecture we describe here allows for both these approaches. However, it is designed to support alternative paradigms of interoperability and extensibility through the provision of logical relationshi...
Neural Networks, 2011
Multi-scale and multi-modal neural modeling requires handling multiple neural models, described at different levels, seamlessly. Database technology will become more important for these studies, specifically for downloading and handling the neural models seamlessly and effortlessly. To date, conventional neuroinformatics databases have been designed solely to archive model files, but they should also give users a chance to validate the models before downloading them. In this paper, we report our ongoing project to develop a cloud-based web service for online simulation called "Simulation Platform". Simulation Platform is a cloud of virtual machines running GNU/Linux. Each virtual machine comes pre-installed with various software, including developer tools such as compilers and libraries, popular neural simulators such as GENESIS, NEURON, and NEST, and scientific software such as Gnuplot, R, and Octave. When a user posts a request, a virtual machine is assigned to the user, and the simulation starts on that machine. The user remotely accesses the machine through a web browser and carries out the simulation without needing to install any software other than a web browser on their own computer. Therefore, Simulation Platform is expected to eliminate the impediments to handling multiple neural models that require multiple software packages.
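The request-to-machine assignment this abstract describes can be illustrated with a toy sketch: a pre-provisioned pool of virtual machines, one of which is handed to each incoming user request. The class and method names below are hypothetical illustrations, not the actual Simulation Platform implementation.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    """Hypothetical pre-provisioned GNU/Linux VM with simulators installed."""
    host: str
    in_use: bool = False

@dataclass
class SimulationService:
    """Toy model: assign a free VM from the pool when a user posts a request."""
    pool: list = field(default_factory=list)

    def assign(self, user: str) -> str:
        for vm in self.pool:
            if not vm.in_use:
                vm.in_use = True
                # In the real service the user would now reach this machine
                # through a web browser; here we just return its address.
                return f"{user} -> {vm.host}"
        raise RuntimeError("no free virtual machine in the pool")

    def release(self, host: str) -> None:
        for vm in self.pool:
            if vm.host == host:
                vm.in_use = False

service = SimulationService([VirtualMachine("vm-01"), VirtualMachine("vm-02")])
print(service.assign("alice"))  # alice -> vm-01
```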
Nucleic Acids Research, 2021
Comprehensive, predictive computational models have significant potential for science, bioengineering, and medicine. One promising way to achieve more predictive models is to combine submodels of multiple subsystems. To capture the multiple scales of biology, these submodels will likely require multiple modeling frameworks and simulation algorithms. Several community resources are already available for working with many of these frameworks and algorithms. However, the variety and sheer number of these resources make it challenging to find and use appropriate tools for each model, especially for novice modelers and experimentalists. To make these resources easier to use, we developed RunBioSimulations (https://run.biosimulations.org), a single web application for executing a broad range of models. RunBioSimulations leverages community resources, including BioSimulators, a new open registry of simulation tools. These resources currently enable RunBioSimulations to execute nine framewo...
International Journal of Advanced Research in Computer Science and Software Engineering
Molecular biology is highly dynamic in nature: innumerable conformational states are accessible at physiological temperatures, in contrast to the static illustrations of proteins, nucleic acids, and other biomolecular structures normally printed in books. A given biomolecule also samples a rapidly fluctuating local environment comprised of other biopolymers, small molecules, water, ions, etc. that diffuse to within a few nanometers, leading to intermolecular interactions and the formation of supramolecular assemblies. These intra- and intermolecular contacts are governed by the same physical principles (forces, energetics) that characterize individual molecules and interatomic interactions, thereby enabling a unified picture of the physical basis of molecular interactions from a small set of fundamental principles. Computational approaches are well suited to studies of molecular interactions, from the intramolecular conformational sampling of individual proteins (such as membrane receptors or ion channels) to the diffusional dynamics and intermolecular collisions that occur in the early stages of formation of cellular-scale assemblies. To study such phenomena, two major lineages of computational approaches have developed in molecular biology: physics-based methods (often referred to as simulations) and informatics-based approaches (often termed data mining or machine learning, i.e., knowledge extraction via statistical inference). An advantage of the former approach is its physical realism, while an advantage of the latter is its potential to illuminate evolutionary (phylogenetic) relationships, i.e., the evolution of a genetically related group of organisms as distinguished from the development of the individual organism. This paper highlights the utility of cloud computing for biomolecular simulation (i.e., the physics-based method) and suggests rules for setting up cloud computing facilities to facilitate research using biomolecular simulation.
Journal of Neurology Research Review & Reports, 2023
Reproducibility is a key component of scientific research, and its significance has been increasingly recognized in the field of neuroscience. This paper explores the origin, need, and benefits of reproducibility in neuroscience research, as well as the current landscape surrounding this practice, and further argues that the boundaries of current reproducibility should be expanded to include computing infrastructure. The reproducibility movement stems from concerns about the credibility and reliability of scientific findings in various disciplines, including neuroscience. The need for reproducibility arises from the importance of building a robust knowledge base and ensuring the reliability of research findings. Reproducible studies enable independent verification, reduce the dissemination of false or misleading results, and foster trust and integrity within the scientific community. Collaborative efforts and knowledge sharing are facilitated, leading to accelerated scientific progress and the translation of research into practical applications. On the data front, we have platforms such as OpenNeuro for open data sharing; on the analysis front, we have containerized processing pipelines published in public repositories, which are reusable. There are also platforms such as OpenNeuro, NeuroCAAS, and brainlife, which cater to the need for a computing platform. However, along with their benefits, these platforms have limitations, as only certain types of processing pipelines can be run on the data. Also, in the world of data integrity and governance, it may not be far in the future that some countries require data to be processed within their borders, limiting the usage of such platforms. To enable customized, scalable neuroscience research, alongside open data and containerized analyses open to all, we need a way to deploy the cloud infrastructure required for an analysis from templates. These templates are a blueprint, in the form of code, of the infrastructure required for reproducible research and analysis. This will empower anyone to deploy computational infrastructure on the cloud and run data processing pipelines on infrastructure of their own choice and scale. Just as Dockerfiles are created for any analysis software developed, an infrastructure-as-code (IaC) template accompanying any published analysis pipeline will enable users to deploy the cloud infrastructure required to carry out the analysis on their data.
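To make the template idea concrete, the following Python sketch launches a single analysis machine with boto3 and bootstraps a containerized pipeline on it. Every identifier (AMI, instance type, container image, S3 paths, pipeline flags) is a placeholder assumption; a published IaC template would typically express the same blueprint declaratively (e.g., CloudFormation or Terraform) rather than imperatively.

```python
import boto3

# Placeholder values -- a real template would parameterize these.
AMI_ID = "ami-0123456789abcdef0"          # hypothetical Linux AMI
INSTANCE_TYPE = "c5.2xlarge"              # sized for the analysis at hand
PIPELINE_IMAGE = "example/fmri-pipeline"  # hypothetical published container

# Cloud-init script: install Docker and run the containerized pipeline,
# mirroring how an IaC template would bootstrap the analysis environment.
# The container's --input/--output flags are illustrative, not a real CLI.
USER_DATA = f"""#!/bin/bash
yum install -y docker && systemctl start docker
docker run --rm {PIPELINE_IMAGE} --input s3://my-bucket/data --output s3://my-bucket/results
"""

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId=AMI_ID,
    InstanceType=INSTANCE_TYPE,
    MinCount=1,
    MaxCount=1,
    UserData=USER_DATA,
)
print("launched:", response["Instances"][0]["InstanceId"])
```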
Nucleic Acids Research
Computational models have great potential to accelerate bioscience, bioengineering, and medicine. However, it remains challenging to reproduce and reuse simulations, in part, because the numerous formats and methods for simulating various subsystems and scales remain siloed by different software tools. For example, each tool must be executed through a distinct interface. To help investigators find and use simulation tools, we developed BioSimulators (https://biosimulators.org), a central registry of the capabilities of simulation tools and consistent Python, command-line and containerized interfaces to each version of each tool. The foundation of BioSimulators is standards, such as CellML, SBML, SED-ML and the COMBINE archive format, and validation tools for simulation projects and simulation tools that ensure these standards are used consistently. To help modelers find tools for particular projects, we have also used the registry to develop recommendation services. We anticipate th...
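The "consistent containerized interface" this abstract describes can be sketched as a uniform wrapper. The convention assumed below (every registered image accepting "-i <archive> -o <out-dir>") is an illustration rather than a statement of BioSimulators' actual API; the point is that a registry of uniform images lets any simulation tool be driven the same way.

```python
import subprocess
from pathlib import Path

def run_simulation(tool_image: str, archive: Path, out_dir: Path) -> None:
    """Run one simulation tool through a uniform containerized interface.

    Assumes (hypothetically) that every registered image accepts the same
    '-i <archive> -o <out-dir>' convention, so callers never need to learn
    each tool's native command line.
    """
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [
            "docker", "run", "--rm",
            "-v", f"{archive.parent.resolve()}:/in",
            "-v", f"{out_dir.resolve()}:/out",
            tool_image,
            "-i", f"/in/{archive.name}",
            "-o", "/out",
        ],
        check=True,
    )

# The same call works for any tool in the registry, e.g.:
# run_simulation("ghcr.io/example/tellurium", Path("model.omex"), Path("results"))
```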
2020
A major goal of computational neuroscience is the development of powerful data analyses that operate on large datasets. These analyses form an essential toolset to derive scientific insights from new experiments. Unfortunately, a major obstacle currently impedes progress: novel data analyses have a hidden dependence upon complex computing infrastructure (e.g. software dependencies, hardware), acting as an unaddressed deterrent to potential analysis users. While existing analyses are increasingly shared as open source software, the infrastructure needed to deploy these analyses – at scale, reproducibly, cheaply, and quickly – remains totally inaccessible to all but a minority of expert users. In this work we develop Neuroscience Cloud Analysis As a Service (NeuroCAAS): a fully automated analysis platform that makes state-of-the-art data analysis tools accessible to the neuroscience community. Based on modern large-scale computing advances, NeuroCAAS is an open source platform with a ...