2009
Space software systems are usually employed repeatedly across several space missions and, as a consequence, have a long life cycle. These legacy systems are still employed in important space projects and, in most cases, were designed using old-fashioned structured analysis techniques and aged development platforms. However, they cannot be overlooked. This paper describes ongoing work at the Institute of Aeronautics and Space (IAE) to conduct a process for updating legacy space software systems, taking a balanced approach when employing new technologies while keeping traceability with the old models, even though different techniques are applied. This transition aims not only to update the software but also to incorporate new requirements derived from new space mission goals. Considering that the technologies related to such software systems are in continuous progress, this initiative has two main benefits: bringing to legacy systems and space projects technological innovations that can facilitate and improve their maintenance process, and keeping active systems that have proven to be cost-effective and reliable. A case study was conducted using part of a flight control software system, in which old models were revised to reflect new requirements and new models were elaborated to complement the old ones. As a result, new tools and techniques could be used to improve the understanding of the software system and to bring advances to the verification and validation process.
2021
Due to the autonomous nature of spacecraft, on-board devices feature relatively complex functionality in their software, including interface(s) that provide telecommands and telemetry. Once in space, the devices are physically inaccessible and even a small defect can cause a major failure from which recovery is impossible. To prevent defects with such consequences, the development of on-board software for space applications requires a substantial investment in quality, and the additional manual work results in increased cost. Therefore, it is necessary to consider opportunities that reduce manual work while minimizing the impact on quality. This paper presents the particular solution adopted for the GNSSaS mission (developed by the NSSTC) and generalizes it to propose a framework for the development of on-board software for small satellites. It evaluates the different software components involved in a mission and attempts to identify all opportunities to use data modelling and automated code generation to support the development, validation and operations of software for both the space and ground segments. The proposed opportunities are described in detail and their impacts are discussed in terms of quality and cost. As in previous works, code generation from interface data models was explored, and additional opportunities were implemented as well. As shown in the GNSSaS satellite, it is possible to develop feature-rich, complex and flexible software in time, in quality and in budget with a relatively small team.
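The idea of generating code from interface data models can be sketched in miniature: given a declarative description of a telemetry packet, a parser can be produced mechanically. The packet layout, field names and generator below are purely illustrative assumptions, not taken from the GNSSaS mission's actual interface definitions.

```python
import struct

# Hypothetical interface data model for one telemetry packet (illustrative,
# not the GNSSaS interface definition).
TM_MODEL = {
    "name": "HousekeepingTM",
    "fields": [("mode", "B"), ("battery_mv", "H"), ("temp_c", "h")],
}

def generate_parser(model):
    """Emit Python source for a packet parser derived from the data model."""
    fmt = ">" + "".join(code for _, code in model["fields"])
    names = ", ".join(repr(name) for name, _ in model["fields"])
    return (
        f"def parse_{model['name']}(raw):\n"
        f"    import struct\n"
        f"    values = struct.unpack('{fmt}', raw)\n"
        f"    return dict(zip(({names},), values))\n"
    )

# Generate the parser, load it, and run it on a synthetic packet.
namespace = {}
exec(generate_parser(TM_MODEL), namespace)
packet = struct.pack(">BHh", 2, 3600, -15)
fields = namespace["parse_HousekeepingTM"](packet)
```

The same data model can drive telemetry display, database definitions and test stubs, which is what makes the approach attractive for small teams.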
2010
Developing software for highly dependable space applications and systems is a formidable task. With new political and market pressures on the space industry to deliver more software at a lower cost, optimization of its methods and standards needs to be investigated. The industry has to follow standards that strictly set quality goals and prescribe engineering processes and methods to fulfill them. The overall goal of this study is to evaluate whether the current use of the standards from the European Cooperation for Space Standardization (ECSS) is cost efficient, whether there are ways to make the process leaner while still maintaining quality, and to analyze whether the associated verification and validation (V&V) activities can be optimized.
SpaceOps 2008 Conference, 2008
This paper presents an Independent Software Verification and Validation process that applies reviews for verification and a systematic testing methodology to guide validation. This process was applied to a pilot project named Quality Software Embedded in Space Missions (QSEE) at INPE and yielded very good results. The main feature of the process is that it uses a particular testing methodology named CoFI and an automatic test-case generation tool based on state models. These features allowed systematizing the validation activities, which were carried out by a team not involved in the software development. The main activities of the process are presented, along with the results in terms of the errors found not only through the reviews but also through the tests. Lessons learned, including drawbacks and benefits, are discussed as well.
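The idea of deriving test cases automatically from a state model can be illustrated with a minimal sketch: enumerate event sequences through a finite state machine and pair each with its expected final state. The state machine, events and coverage criterion below are invented for illustration and do not reproduce the CoFI methodology or the QSEE software.

```python
from collections import deque

# Hypothetical state model of a small on-board command handler
# (illustrative only, not taken from QSEE).
TRANSITIONS = {
    ("IDLE", "power_on"): "READY",
    ("READY", "start_acq"): "ACQUIRING",
    ("ACQUIRING", "stop_acq"): "READY",
    ("READY", "power_off"): "IDLE",
}

def generate_test_sequences(start, max_len):
    """Enumerate all event sequences up to max_len from the state model.

    Each result is a candidate test case: replay the events against the
    implementation and check that it reaches the expected final state.
    """
    tests = []
    queue = deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if path:
            tests.append((path, state))   # (event sequence, expected state)
        if len(path) == max_len:
            continue
        for (src, event), dst in TRANSITIONS.items():
            if src == state:
                queue.append((dst, path + [event]))
    return tests

tests = generate_test_sequences("IDLE", 3)
```

Because the sequences are generated mechanically from the model, a validation team that did not write the software can still produce a systematic test suite from the specification alone.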
ACM SIGSOFT Software Engineering Notes, 1994
The ESSDE Reference Facility Project, whose goal is to provide a uniform, open environment for software development at the European Space Agency (ESA), has just completed the architectural design phase. A software engineering environment based upon the Portable Common Tool Environment (PCTE) interfaces has been specified, including a complete data model supporting all activities and products in the ESA standard software development life cycle. Several issues of much current interest have been addressed including scalability, configurability and the integration of commercial tools into an existing framework.
Synthesis of Embedded Software, 2010
Software for space applications has special requirements in terms of reliability and dependability, and the verification and validation activities (VAs) of these systems often account for more than 50% of the development effort. The industry is also faced with political and market pressure to deliver software faster and cheaper. Thus, new ways are needed to optimize these activities so that high quality can be retained even with reduced costs and effort. Here we present a framework for the management and optimization of verification and validation activities (VAMOS). An initial evaluation of the framework, based on historical data as well as data extracted with a new tool, has been done and is described briefly.
2002
Over the years, the complexity of space missions has dramatically increased, with more of the critical aspects of a spacecraft's design being implemented in software. With the added functionality and performance required of the software to meet system requirements, the robustness of the software must be upheld. Traditional software validation methods of simulation and testing are being stretched to adequately cover the needs of software development in this growing environment. It is becoming increasingly difficult to establish traditional software validation practices that confidently confirm the robustness of the design in balance with the cost and schedule needs of the project. As a result, model checking is emerging as a powerful validation technique for mission-critical software. Model checking conducts an exhaustive exploration of all possible behaviors of a software system design and, as such, can be used to detect defects in designs that are typically difficult to discover with conventional testing approaches.
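The core of explicit-state model checking, exhaustively visiting every reachable state and checking a property in each, can be sketched in a few lines. The toy transition system and invariant below are illustrative assumptions, not drawn from any mission software; real model checkers add symbolic representations, temporal logic and counterexample traces.

```python
from collections import deque

def check_invariant(initial, successors, invariant):
    """Tiny explicit-state model checker: breadth-first search over all
    reachable states, returning the first state that violates the
    invariant, or None if the invariant holds everywhere."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state                  # counterexample state found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None                          # invariant holds on all reachable states

# Toy example: a mode counter that is required never to exceed 3,
# but whose transition relation can in fact reach 4.
succ = lambda s: [s + 1] if s < 4 else []
violation = check_invariant(0, succ, lambda s: s <= 3)
```

The exhaustiveness is the point: unlike a test suite, the search cannot miss a reachable violating state, which is why the technique finds defects conventional testing tends to overlook.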
Software and Systems Modeling, 2021
The development process of on-board software applications can benefit from model-driven engineering techniques. Model validation and model transformations can be applied to drive the activities of specification, requirements definition, and system-level validation and verification according to the space software engineering standards ECSS-E-ST-40 and ECSS-Q-ST-80. This paper presents a model-driven approach to completing these activities by avoiding inconsistencies between the documents that support them and providing the ability to automatically generate the system-level validation tests that are run on the Ground Support Equipment and the matrices required to complete the software verification. A demonstrator of the approach has been built using as a proof of concept a subset of the functionality of the software of the control unit of the Energetic Particle Detector instrument on-board Solar Orbiter.
Information and Software Technology, 1995
This paper examines the premise that current software-level practices within the aerospace industry are weak and that there is a lack of rigour in both technical and managerial areas. Results from a survey of practitioners are presented which indicate that a lack of information interchange exists and that the use of formal techniques is limited. The paper proposes that this is indicative of poor life-cycle practices and that more rigorous methodologies, ones that integrate formal methods with quality practices, are required. A two-level model is proposed to address the issue.
2015
This paper describes the approach used by Oerlikon Aerospace since 1993 to define and implement software and systems engineering processes. First, the steps taken to assess and define a software process are described using the Software Engineering
2011 Aerospace Conference, 2011
The NASA Independent Verification and Validation (IV&V) Facility's objective is to identify potential defects in flight software using independent analysis techniques. This paper describes the tailored IV&V techniques that have been developed in support of critical interactions on the Mars Science Laboratory (MSL) project, scheduled to launch in November 2011. The IV&V techniques for interface analysis use independently developed sequence diagrams of critical scenarios. The results from these analyses have had a positive impact on the requirements flow-down, consistency among MSL requirements, and the identification of missing requirements. The results of these analyses and their positive impact on the MSL project are provided.
This paper reports results of the ESA MARVELS study (Model-based approach research for the verification enhancement across the lifecycle of a space system), with the objectives to define adequate model-based methods to improve the overall verification process of space systems, and to define, prototype and integrate supporting tools for System Verification along the entire project life-cycle. The study results aim to demonstrate how it is possible, in the near term, to perform a transition to a full Model Based System Engineering (MBSE) approach for the verification process, including re-use of elements from past projects, and definition of new ways to support a more effective review process. The feasibility of shifting to this innovative approach in the short term is proven by the definition, prototyping and demonstration of a suitable methodology and environment to support the activities of verification managers and practitioners from different industrial levels, along the lifecycle.
2010
The Flight Production Process (FPP) Re-engineering project has established a Model-Based Systems Engineering (MBSE) methodology and the technological infrastructure for the design and development of a reference, product-line architecture as well as an integrated workflow model for the Mission Operations System (MOS) for human space exploration missions at NASA Johnson Space Center. The design and architectural artifacts have been developed based on the expertise and knowledge of numerous Subject Matter Experts (SMEs). The technological infrastructure developed by the FPP Re-engineering project has enabled the structured collection and integration of this knowledge and further provides simulation and analysis capabilities for optimization purposes. A key strength of this strategy has been the judicious combination of COTS products with custom coding. The lean management approach that has led to the success of this project is based on having a strong vision for the whole lifecycle of ...
This paper presents a model based methodology that relies on the sound basis of the most recent and widespread applicable system engineering standards and model based practices, The methodology has been defined to support domain specific space system engineering standards and practices and assessed through the application on industrial case studies. A complementary formal verification approach has also been experimented.
2009
Developing software for highly dependable space applications and systems is a formidable task. With new political and market pressures on the space industry to deliver more software at a lower cost, optimization of its methods and standards needs to be investigated. The industry has to follow standards that strictly set quality goals and prescribe engineering processes and methods to fulfill them. The overall goal of this study is to evaluate whether the current use of ECSS standards is cost efficient, whether there are ways to make the process leaner while still maintaining quality, and to analyze whether the associated V&V activities can be optimized. This paper presents results from two industrial case studies of companies in the European space industry that follow ECSS standards and perform various V&V activities. The case studies reported here focused on how the ECSS standards were used by the companies, how that affected their processes, and how their V&V activities can be optimized.
SpaceOps 2012 Conference, 2012
There is a big semantic "gap" between the textual information spread across the many documents (space system manuals, etc.) used in operations and what is actually produced (software, hardware, procedures, the spacecraft database, etc.) and used for validation (simulators, test beds, failure analysis tools, etc.). The operational user is caught between a huge amount of documentation and very low-level information spread across many different specialized formalisms and tools. It is very difficult to quickly obtain the information necessary to understand how the system works, which is a key point for many operations tasks such as the design of operational procedures, alarm or error analysis, etc. A formalized model that captures the system knowledge would not only help designers and operators grasp it much more efficiently than from documents; it could also be used by the computer to ease the many verification and validation tasks to be done, and could enable more efficient traceability with requirements during the whole system life cycle. Some attempts to move towards this formalized, global and centralized system view have already been made, for instance with the "Space System Model" from the European ECSS-E-ST-70-31 standard. However, it needs to be much further enriched to take into account all the different views of the system and obtain all the expected benefits. In this paper, we present the results of a CNES R&T study called EGPO, conducted with the help of the ATOS company, to go further in this direction, initiating a richer view of the space system and formalizing it with a customization of the SysML standard for the space domain.
Dynamics of Long-Life Assets, 2017
This chapter describes the Space cluster use case using the innovative Space Tug project as an example. It provides an overview of the objectives (customer in the loop, quicker technical response) and related methods to support foreseen improvements through a dedicated toolchain. The IT infrastructure used for the demonstration serves as an enabling and demonstrative system with a focus on modelling and collaboration aspects, as outlined in Chapter "Extending the System Model", on the flow of information, and on tool infrastructure and project costs. The developed tools are as follows:
• A web-based toolchain that includes functional analysis, discipline analysis, 3D modelling and virtual reality for project team collaboration.
• A workflow manager for collaboration between different companies.
• Small devices called 'probes' to ensure security and data protection in intercompany collaboration.
• A configurable customer front-end to ensure that the customer remains informed.
Innovations in Systems and Software Engineering, 2005
2016 IEEE International Conference on Software Quality, Reliability and Security (QRS), 2016
In this paper, we present the software reliability analysis of the flight software of a recently launched space mission. For our analysis, we use the defect reports collected during the flight software development. We find that this software was developed in multiple releases, each release spanning all software life-cycle phases. We also find that the software releases were developed and tested on four different hardware platforms, ranging from off-the-shelf or emulation hardware to the actual flight hardware. For releases that exhibit reliability growth or decay, we fit Software Reliability Growth Models (SRGMs); otherwise we fit a distribution function. We find that most releases exhibit reliability growth, with Log-Logistic (NHPP) and S-Shaped (NHPP) as the best-fit SRGMs. For the releases that experience reliability decay, we investigate the underlying causes. We find that such releases were the first software releases to be tested on a new hardware platform, and hence they encountered major hardware integration issues. Such releases also seem to have been developed under time pressure in order to start testing on the new hardware platform sooner. They exhibit poor reliability growth and hence a high predicted failure rate. Other problems include hardware specification changes and delivery delays from vendors. Thus, our analysis provides critical insights and inputs to management to improve the software development process. As NASA has moved towards product line engineering for its flight software development, software for future space missions will be developed in a similar manner, and hence the analysis results for this mission can be considered a baseline for future flight software missions.
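Fitting an SRGM to cumulative defect data can be sketched as follows. The S-Shaped (delayed) NHPP mean value function m(t) = a(1 - (1 + bt)e^(-bt)) is standard; the defect counts and the crude grid-search least-squares fit below are illustrative stand-ins for the mission data and the real optimizer used in such studies.

```python
import math

def s_shaped_mvf(t, a, b):
    """Mean value function of the S-Shaped (delayed) NHPP SRGM:
    expected cumulative number of defects found by time t."""
    return a * (1.0 - (1.0 + b * t) * math.exp(-b * t))

def fit_grid(times, counts, a_grid, b_grid):
    """Crude grid-search least-squares fit (a stand-in for a real
    maximum-likelihood or nonlinear least-squares optimizer)."""
    best = None
    for a in a_grid:
        for b in b_grid:
            sse = sum((s_shaped_mvf(t, a, b) - c) ** 2
                      for t, c in zip(times, counts))
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]

# Illustrative defect data (test week, cumulative defects) -- invented,
# not taken from the mission analyzed in the paper.
weeks = [1, 2, 4, 6, 8, 12]
counts = [3, 9, 22, 31, 36, 40]
a, b = fit_grid(weeks, counts,
                a_grid=[30 + i for i in range(31)],        # a in 30..60
                b_grid=[0.1 + 0.05 * i for i in range(20)])  # b in 0.10..1.05
```

The fitted parameter a estimates the total expected defect content and b the detection rate; extrapolating m(t) beyond the last test week gives the predicted residual failure rate discussed in the paper.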
Proc. of the Int. Space …, 2009
This paper reports the results of an ESA-funded project on the use of abstract interpretation to validate critical real-time embedded space software. Abstract interpretation has been used industrially for several years, especially for the validation of the Ariane 5 launcher; however, the limitations of the tools used so far have prevented wider deployment. Astrium Space Transportation, CEA, and ENS have analyzed the performance of two recent tools on a case study extracted from the safety software of the ATV: ASTRÉE, developed by ENS and CNRS, to check for run-time errors, and FLUCTUAT, developed by CEA, to analyse the accuracy of numerical computations. The conclusion of the study is that the performance of this new generation of tools has dramatically increased (no false alarms and fine analysis of numerical precision).
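The principle behind run-time-error checking by abstract interpretation, propagating sound over-approximations of program values instead of concrete inputs, can be sketched with a toy interval domain. The class below is a deliberately minimal illustration; industrial analyzers such as ASTRÉE combine far richer abstract domains with widening and trace partitioning.

```python
class Interval:
    """Toy interval abstract domain: each value is tracked as [lo, hi]."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sound over-approximation of addition on intervals.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # All corner products bound the result of interval multiplication.
        products = [a * b for a in (self.lo, self.hi)
                          for b in (other.lo, other.hi)]
        return Interval(min(products), max(products))

    def may_divide_by_zero(self):
        # A division by this value is a potential run-time error
        # whenever zero lies inside the interval.
        return self.lo <= 0 <= self.hi

# Abstractly evaluate y = x * 2 + 1 for any sensor input x in [-10, 10],
# covering every concrete execution in a single analysis pass.
x = Interval(-10, 10)
y = x * Interval(2, 2) + Interval(1, 1)
```

Here the analysis proves y lies in [-19, 21] for all inputs, and flags that a subsequent division by y could fail since 0 falls inside that range; the imprecision of coarse domains is exactly what produced the false alarms that richer tools like ASTRÉE eliminate.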