SSEO Sbobine

The document provides an overview of space system engineering, detailing the phases of project design, implementation, and management, as well as the breakdown of space and ground segments. It emphasizes the importance of understanding requirements, optimizing design alternatives, and ensuring technology readiness levels for successful mission execution. Additionally, it highlights the need for continuous support and adaptation throughout the lifecycle of a space mission.

Uploaded by maria.santangelo

UNIFIED DOCUMENT

1. Space system engineering introduction

1.1 How space mission and system engineering works


Key point: there are many possible space missions and space systems. You start at the very
beginning with some crazy, innovative idea (e.g. humans on Mars, going to the boundary of the solar
system) and you transform it into something you can measure (e.g. reflectance in UV bands, visible
bands, etc.) → find the numbers that tell you how to build the scientific model.
Once I know these rough numbers, I ask myself whether there is a physical mechanism that lets me
translate that measurement into what I want.
EXAMPLE: Permittivity allows me to understand how much water there is under the surface =>
I can use a radar/spectrometer to understand where the water is, and then build the
platform/service module as a whole.
In this course we will see:
- What the configuration is
- What the resources are
- Which operations I must perform
- Do I need control authority on the center of mass or on the attitude determination and control?
An important point to keep in mind is that the job starts when you design the s/c
but continues after you deliver it → you must be available to support whoever is operating the
activities in space, because you are the only one who knows perfectly what is inside and how the
system works, and you can judge whether the system is operating correctly or whether there is an anomaly.
Moreover, as a system engineer, I have to trade off the size of the tanks (driven by fuel mass), pressures,
diameters, etc. (some constraints derive from the launcher) → I need to understand who is doing what
and which data they are using, so that I know what and who will be affected by any modification.
1.2 Space system breakdown

We can define 2 segments:

1) Space segment → includes:
- service module
- payload → could be scientific (spectrometer, drill, robotic arm, etc.), service,
commercial, a GPS transmitter (it can be anything)
2) Ground segment → includes:
- antennas on the ground
- constellations → which can relay our signal down to the ground (e.g. Iridium, GLONASS,
Orbcomm)
The idea for the future is to create a constellation for the Moon as well → build a sort of cluster
that will bounce all the data from the Moon to the ground in order to always have complete
coverage; to do that we need to:
- identify and select the network
- build/select/identify who is going to process the data coming from the satellite,
prepare the commands and strings to be sent to the satellite, manage payloads and
reconfigurations (if any), and talk with the scientists and engineers who built them to
understand whether the system is behaving nicely or not
Note: While we build the s/c, we also build its mirror on Earth (e.g. the antennas that are going to
receive the signal) so that when the satellite is in orbit it is going to talk with us → 2 different entities.
1.3 Project phases
1) System design → we define all the parameters needed to understand and realize the project.
2) System implementation → I write software, I build/buy electronics, etc. (I start having my
physical object).
3) System management → operational phase
1.4 System design
We will understand why you put things in a certain position/configuration, why it has a certain
shape, etc. => always ask why (there is always a motivation).
Let's start with the space segment design. Why is there a central cylinder in the spacecraft?
Why did you design it in this way? The answer is associated with physical and structural reasons.
There are tanks on top which generate a certain pressure, therefore:
- Due to the presence of static and dynamic loads (which are challenging), the primary
structure has to be designed so that it can withstand and support these loads (any kind of
load shall be transmitted to this structure, so the interface with the launcher will be this one).
In any spacecraft the diameter of this cylinder will fit perfectly the adapter of the launcher
→ by studying its mechanical load transfer we can identify the correct design => we design
the rest of the spacecraft and we select the launcher from the dynamics point of view,
according to the mechanical/structural constraints (starting from the diameter of the adapter
and the loads, you change the launcher and look at its curves in terms of Δ𝑣 given, energy given,
inclination, etc.).
- When the s/c performs maneuvers it is subject to mass variations (not in a continuous way) →
the inertia matrix changes significantly (at some point) → this affects the attitude determination
and control of the s/c.
=> The best way to handle it is to align the spacecraft with an inertia axis → any displacement
of the center of mass will stay along 1 axis.
Note that it is not frequent to have all the instruments on the horizontal plane (quite massive)
→ a vertical-plane configuration for launch avoids low natural frequencies (the induced
frequencies of the dynamic loads are mainly in the launch direction).
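The launcher-selection loop described above can be sketched as a simple compatibility filter. A minimal sketch, where all names, adapter diameters and performance figures are invented placeholders, not real launcher data:

```python
# Hypothetical launcher catalogue: the adapter diameter must match the
# central cylinder, and the performance must cover the wet mass with margin.
launchers = [
    {"name": "Launcher A", "adapter_diameter_m": 1.666, "payload_to_orbit_kg": 5000},
    {"name": "Launcher B", "adapter_diameter_m": 2.624, "payload_to_orbit_kg": 10500},
    {"name": "Launcher C", "adapter_diameter_m": 0.937, "payload_to_orbit_kg": 2300},
]

def compatible(catalogue, cylinder_diameter_m, wet_mass_kg, margin=1.2):
    """Keep launchers whose adapter fits the central cylinder and whose
    performance covers the required mass with a system-level margin."""
    return [
        l["name"] for l in catalogue
        if abs(l["adapter_diameter_m"] - cylinder_diameter_m) < 0.05
        and l["payload_to_orbit_kg"] >= margin * wet_mass_kg
    ]

print(compatible(launchers, 1.666, 3000))  # → ['Launcher A']
```

In a real study the performance would be a curve of delivered mass versus energy and inclination, read from the launcher manual, rather than a single number.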
Let's now focus on the ground segment design; the ground segment is not only antennas but also users.
It is built while the spacecraft is being sized → for every mission you have to choose and size the
operation center.
It's important to keep in mind that the design of the s/c will largely affect what happens next →
during the design and implementation phases we need to build a simulator/emulator of the whole
spacecraft (model the dynamics, synthesize the control, numerically model the actuators, etc.).
If we don't have one, when we start receiving data that are not what we were expecting, we will
not be able to understand where the problem is (whether an actuator failed, whether the
environment is different, etc.). Instead, if we have the emulator we can feed the data in as input and
run the simulation so that we can identify the source of the error.
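A toy sketch of the emulator idea, with invented numbers and a deliberately trivial 1-D dynamics model: run the expected behaviour alongside the received telemetry and locate where they diverge, instead of guessing at the anomaly.

```python
def emulate(x0, v0, accel, dt, n):
    """Propagate a trivial 1-D position/velocity model (a stand-in for the
    full dynamics/actuator models of a real spacecraft emulator)."""
    x, v, states = x0, v0, []
    for _ in range(n):
        v += accel * dt
        x += v * dt
        states.append(x)
    return states

expected = emulate(0.0, 1.0, 0.0, 1.0, 5)   # nominal: coasting, no thrust
telemetry = [1.0, 2.0, 3.0, 3.5, 4.0]       # received data (made up)

# First step where flight data and emulator disagree:
anomaly_step = next(i for i, (e, t) in enumerate(zip(expected, telemetry))
                    if abs(e - t) > 1e-6)
print(anomaly_step)  # → 3: divergence starts here (e.g. an actuator issue)
```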
The picture on the side is representative of a very general architecture of the operations
(still part of the design) → even at phase 0 we should decide which elements our system shall
rely on: GPS, satellite constellations, which ground station will "talk" with the satellite,
which station should handle the payload signal (scientific data → antenna only for the payload) and
which one should handle the signal from the platform (spacecraft → antenna only for the s/c).
Note: Everything you design is associated with a cost, which is driven by the market. Since space
projects are very expensive (infrastructure and personnel 24/7), if we are working on a European
mission we usually select European networks and European launchers (we don't use NASA facilities
because they are more expensive).

1.5 Mission design: how it works


While doing the design you have to:
1) Understand clearly what we want to obtain → if we design a great product but it has nothing
to do with the measurements, it is totally useless (functionally speaking).
EXAMPLE: We want to map the water on the Martian surface; therefore, we need a specific band
for the radar and we need to be on a specific orbit. If we design a spacecraft with the wrong
orbit and a different band for the radar, then the s/c is totally out of scope.
=> We need to set requirements (functional, operational, technical, etc.) → whatever I build
shall withstand the environment (temperature, radiation, dust, inclination, etc.) and the
operational constraints.
EXAMPLE: I have to design a rover; by looking at the requirements and
checking the landing spot inclination, I can decide whether the rover needs adjustable legs
or not => requirements are an indication of how to judge whether to adopt a
configuration or not, depending on the environment.
Note: If I land while keeping an orbiter, there will be a communication constraint because
the vehicle on the ground will see the ground station only intermittently, due to the relative
dynamics => I must have avionics capable of storing all the data we collect until the station is
visible (this is still associated with reliability).
Note that the only datum that is given to you is the launch date (because of the synchronization
of the planets and of the communication service around Earth) → it is part of the requirements.
2) Find alternatives to our design → we have to be sure that all the crazy ideas are on the table
=> it is important to work as a team because everyone can contribute to the project.

EXAMPLE: I want to land on Mars

Option 1: I arrive with an orbiter and I land with it
Option 2: I arrive with an orbiter and, once there, I release a lander
Option 3: I arrive with an orbiter, I release a lander and then I use a rover → I
don't need precision landing because I can move around
Therefore, I start building possibilities for my functionalities (I don't lose opportunities) and
compare them to understand their pros and cons.
To choose an alternative I have to consider at least 3 or 4 optimization criteria, such as: mass,
cost (in terms of budget → hardware + personnel), capability of reducing the touchdown
ellipse, power demand, etc. Note that the different criteria must be uncorrelated (if 2
criteria are related I have to keep only one).
EXAMPLE: Mass and size are related → I have to choose either one or the other
(otherwise the comparison is useless).
Note: Mass and budget are partially correlated in terms of hardware cost but not correlated
in terms of operation cost → they can be used together.
In the end I have to come out with numbers that allow me to distinguish between the
different alternatives (static comparison) → I don't have to do precise computations; I
just need to have an idea of the numbers within a certain range (precise sizing is done later).
One optimization criterion can be complexity: how reliable could the solution be? → the
more you spread the functionalities, the more robust you are.
EXAMPLE: If we have a lander and we crash with it, the rover cannot go, but we still
have the orbiter => we can still do science. If we try to land with the lander only
and we crash with it, then the mission is over.
Another important aspect related to the lifecycle of the project is: do we have the right
technologies to satisfy the requirements, or do we need to develop something specific?
EXAMPLE: I have an orbiter and I want to land with it. Do we have the technologies
for a heat shield of the size required to enter with the orbiter, instead of entering with a
smaller lander? If yes, the options are comparable; if no, I have to develop the technology
→ that requires much time and risks lengthening the time to flight.
3) Once we have chosen the criteria (mass, power, cost, etc.), we shall understand whether we are
capable of quantifying them or can only judge them qualitatively. If we have a
formulation that gives us a number (it depends on the model), then the solution will be
stronger. If we can characterize a criterion only qualitatively, then we have to identify a mapping
between the qualitative and quantitative rankings.
4) At this point we have designed the baselines (vehicle, operations, ground, etc.); therefore
we go back, look at what we have selected as important criteria and
requirements/constraints, and update them according to performance (accuracy, etc.).
Note: What we have seen at the very high system level (reliability, requirements, etc.) is valid also
for all the subsystems.
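The comparison steps above (a few uncorrelated criteria, qualitative judgments mapped to numbers, a static weighted comparison) can be sketched as follows; the weights and scores are illustrative assumptions, not values from the course:

```python
QUAL = {"low": 1, "medium": 2, "high": 3}   # qualitative -> quantitative mapping

# Each alternative is scored 1-3 per criterion (higher = better);
# "mass" and "cost" scores are assumed pre-normalized to the same scale.
alternatives = {
    "orbiter only":             {"mass": 3, "cost": 3, "robustness": QUAL["low"]},
    "orbiter + lander":         {"mass": 2, "cost": 2, "robustness": QUAL["medium"]},
    "orbiter + lander + rover": {"mass": 1, "cost": 1, "robustness": QUAL["high"]},
}
weights = {"mass": 0.4, "cost": 0.3, "robustness": 0.3}   # must sum to 1

def score(criteria):
    """Weighted sum over the (uncorrelated) criteria."""
    return sum(weights[c] * s for c, s in criteria.items())

ranking = sorted(alternatives, key=lambda a: score(alternatives[a]), reverse=True)
print(ranking)  # with these (arbitrary) weights the lightest option ranks first
```

Changing the weights changes the winner, which is exactly why the criteria and their relative importance must be agreed on by the team before the comparison.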
2. TECHNOLOGIES AND S/S
Whenever we build a mission (especially an interplanetary one) we should try to put as many
instruments as we can on board and, depending on their scope, they should look at something =>
pay attention to the configuration → all the elements on board impose requirements in terms of
operations, timeline, power, positioning, etc.
Note: If you can, you should try to avoid putting mechanisms on board because they imply possible
failure of the degrees of freedom → if some DOF get locked, there is a problem.
EXAMPLE: Rosetta mission → why is the optical instrumentation in that specific place? For
loads, for attitude control, and to avoid collisions with the lens (protect the lens from any debris).
EXAMPLE: Mars Express service module → a completely different philosophy with respect to
Rosetta. In this case the key is simplicity (panels almost locked, a "cleaner" configuration). It is
characterized by a very large nozzle, used for large maneuvers (entering the sphere of influence,
plane changes), which is placed so that it doesn't impact the adapter (the part of the launcher
which connects to the ground) after release. There are also large antennas, whose mass is
compensated by the internal components (otherwise the inertia matrix is not balanced) → where
to locate them depends on which is the most demanding situation in terms of attitude control
during the lifetime of the satellite. The electronic scheme has very many degrees of freedom →
we can choose to manage the payload using completely detached avionics that is related to, but
not strictly controlled by, the main computer of the platform => I distribute functionalities.

2.1 Tech Readiness Levels


It's important to understand the state of the mission → if we don't have the technology ready to fly,
we cannot do anything. In general, if we already have the technology then we can design whatever
we want; otherwise, if the request is fundamental, we can stop and create a separate line to develop
the new technology.
EXAMPLE: I cannot downlink the data in a high-frequency band in a short time. To solve
the problem, I can use optical-band communication (nowadays used as a prototype).
The ESA TRL (Technology Readiness Levels) classification (aligned with ISO) is very important when
doing a design because it allows us to define the level of "completeness" of the project → depending
on the TRL, we can understand whether a mission can keep going or not.
This classification is valid both for hardware and software (the core of the spacecraft) and is
represented by a scale from 1 to 9:
1) Basic principles observed and reported (mathematical
formulation) → lowest level of technology readiness.
Scientific research begins to be translated into applied
research and development.
2) Technology concept and/or application formulated
(algorithm) → once basic principles are observed,
practical applications can be invented and R&D started.
Applications are speculative and may be unproven.
3) Analytical and experimental critical function and/or
characteristic proof-of-concept (prototype) → active
research and development is initiated, including
analytical/laboratory studies to validate predictions
regarding the technology.
4) Component and/or breadboard validation in laboratory
environment (alpha-version) → basic technological
components are integrated to establish that they will work together.
5) Component and/or breadboard validation in relevant environment (beta-version) → the
basic technological components are integrated with reasonably realistic supporting
elements so it can be tested in a simulated environment.
6) System/subsystem model or prototype demonstration in a relevant environment (Ground
or Space) (product release) → a representative model or prototype system is tested in a
relevant environment
7) System prototype demonstration in a space environment (early-adopter version) → a
prototype system that is near or at the planned operational system.
8) Actual system completed and “flight qualified” through test and demonstration (Ground or
Flight) (general product) → in an actual system, the technology has been proven to work in
its final form and under expected conditions.
9) Actual system “flight proven” through successful mission operations (live product) → the
system incorporating the new technology in its final form has been used under actual
mission conditions.
In order to move from one level to the next we have to complete the tasks associated with
the current level.
Note: From level 5 (included) up to level 9 there is significant HW/SW implementation.
Usually we select instruments depending on the technologies that are available (if you adopt a
technology at TRL 6, then in 4-5 years your mission brings it to 9) → always test before actual use
(never trust the seller → always check the instrument's performance, such as accuracy and
required power).
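The scale can be captured as data; a minimal sketch, where the gate value (TRL 5, per the relaxed threshold discussed later in these notes) and the component names are assumptions for illustration:

```python
from enum import IntEnum

class TRL(IntEnum):
    """ESA/NASA Technology Readiness Levels, abbreviated from the list above."""
    BASIC_PRINCIPLES = 1
    CONCEPT_FORMULATED = 2
    PROOF_OF_CONCEPT = 3
    LAB_VALIDATION = 4
    RELEVANT_ENV_VALIDATION = 5
    RELEVANT_ENV_DEMO = 6
    SPACE_PROTOTYPE = 7
    FLIGHT_QUALIFIED = 8
    FLIGHT_PROVEN = 9

def design_blockers(components, gate=TRL.RELEVANT_ENV_VALIDATION):
    """Components below the gate TRL that would stop the design from
    proceeding (the component names here are hypothetical)."""
    return [name for name, trl in components.items() if trl < gate]

print(design_blockers({"radar": TRL.RELEVANT_ENV_DEMO,
                       "capture_net": TRL.LAB_VALIDATION}))  # → ['capture_net']
```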

22/02/22
Design Process: strongly multidisciplinary & interconnected

This is the scheme used in ESA facilities, specifically in ESTEC (the technology centre near Amsterdam),
for the design of a space system. "Space system" is whatever you are designing: it could be a payload,
a rover, a lander, a constellation…
Let's see: which are the inputs? What do I have to expect from whom? What do I have to output?
With whom do I have to interact?
Message: whatever I have to do in a spacecraft design, everything is connected!
Simple example: design a very simple spacecraft, a box.


You can face it in very different ways. This is the model for building up
a process for designing a system.
You may start from mission analysis. Mission analysis gives me the
launcher because I selected the ∆v needed, the energy needed to be put
in the initial state vector of the vehicle, the orbit selected. Then I select
the launcher and look at the launcher manual. I see I have an envelope. I have
the fairing. I have the other stages below. I put my spacecraft in. I have the
adapter, which is not only the mechanical but also the electrical interface
with the launcher. This is the physical fairing. If you look at the launcher
manual there is also a dotted line called the "dynamic envelope". You are only allowed to have your
system geometrically staying inside this, because of vibrations.
Logic: I have a target (it could be Mars, Earth, whatever).

This gives me the ∆energy needed to be put on orbit.


Thanks to that I select the launcher from the DYNAMIC point of view. Now I look at the launcher
and at what happens in terms of mechanical interfaces.
I understand I have a volumetric envelope I can stay in.
This constrains the size of the spacecraft. I know the limitation, maybe 1x3x10 meters, because I
have a limitation in the three directions.

This is the size I can exploit for the external surface of the box.

Box available. Let's assume you start with no solar panels on wings, but body-mounted.
On the outer surface you might have louvers: you can use the external surface also for thermal
control.
I have my box and its external surface. Now, who starts solving the problem? The power side (how
much area needs to be covered by solar cells) or the thermal side (how much surface needs to
radiate, or to be protected to keep the heat inside the spacecraft)? Who is the leader? If I start with
the EPS (Electrical Power Subsystem) then I'm going to constrain the TCS (Thermal Control Subsystem).
But if I do it the other way around, I may have some issues with respect to the power.
Do I select a hierarchy (start with mission analysis, propulsion…) according to what I think is the most
relevant element to size, and then adapt? This is not an optimal way to deal with your project:
you may lose some very nice alternatives.
➔ There is no hierarchy. In the slide: bubbles with arrows back and forward, because those
two subsystems must talk with each other since they have variables in common. Design
variables (the surface, for example) shall be discussed together, iterated until a common
consensus, a compromise.
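A toy illustration of why the shared design variable must be negotiated rather than assigned by hierarchy; every number below is invented for illustration:

```python
total_area = 6.0        # m^2 of usable external surface on the box (assumed)
power_demand = 900.0    # W required by the platform (assumed)
cell_output = 200.0     # W/m^2 from body-mounted solar cells (assumed)
heat_to_reject = 400.0  # W of internal dissipation (assumed)
radiator_flux = 250.0   # W/m^2 rejected by radiator surface (assumed)

area_eps = power_demand / cell_output      # area the EPS side wants
area_tcs = heat_to_reject / radiator_flux  # area the TCS side wants

if area_eps + area_tcs > total_area:
    # Whichever subsystem "goes first" would starve the other: the loop
    # escalates to a configuration change (e.g. deployable wings), which
    # then raises inertia and pulls attitude control into the discussion.
    verdict = "conflict: escalate to configuration (e.g. solar wings)"
else:
    verdict = "feasible body-mounted allocation"
print(verdict)
```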
What does concurrent engineering design mean? It means what you see in the grey rectangle. You
can enter at each row. Everything influences and is influenced by the other domains. Each bubble is a
person. Put them in a room. They fight. "I'm changing my area." "No, you can't, I need that area."
"Oh, let's put solar panels on wings." Attitude control: "Oh no, I don't want the inertia to increase." Try.
Configuration: "Oh no, I cannot fit in the launcher." Go to mission analysis: "change the launcher".
The system engineer is the leader. He has to know what happens in each subsystem, which are the
driving parameters, the constraints, the technology. He has to lead the discussion.
In violet: the first quantities, the most obvious and trivial criteria. You might have alternatives from
outside (orbiter + lander + rover / orbiter + lander / orbiter) or as output of the discussion, to have
more alternatives to compare.
The first criterion you might use: mass. Dry mass is one thing, fuel mass is another.
Dry mass reflects the hardware you select and the way you size your system. Fuel mass reflects the
selection of the propulsion technology but also the strategy to control the dynamics. You may have
alternatives with the same platform design. So you split the two, and then get the wet mass.
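The dry/fuel split can be sketched with the rocket equation: for the same platform (dry mass) and the same ∆v, the fuel mass depends on the propulsion technology through the specific impulse. The masses, ∆v and Isp values below are illustrative assumptions:

```python
import math

G0 = 9.80665  # m/s^2, standard gravity

def fuel_mass(dry_mass_kg, delta_v_ms, isp_s):
    """Tsiolkovsky: m_fuel = m_dry * (exp(dv / (Isp * g0)) - 1)."""
    return dry_mass_kg * (math.exp(delta_v_ms / (isp_s * G0)) - 1.0)

# Same platform, two propulsion alternatives (illustrative Isp classes):
chemical = fuel_mass(1000.0, 1500.0, 310.0)    # bipropellant-class Isp
electric = fuel_mass(1000.0, 1500.0, 1600.0)   # electric-propulsion-class Isp
print(round(chemical), round(electric))        # → 638 100
```

The two alternatives share the same dry mass but have very different wet masses (dry + fuel), which is why the split matters in the trade-off.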

This happens in ESA, in NASA, in every company.


From outside, what do you need? Which are the goals? Which is the environment? You shall know
what you have to face, which are the challenges. Staying around Earth is different from staying around
Jupiter because the radiation environment, i.e. the trapped ionized particles, is completely different. If
you use the electronics you use on Earth at Jupiter, your spacecraft will die after two days. You shall be
conscious of where you want to go, for how long, and what you have to do.

Criteria might also come from outside. An example ongoing now:


Typically, you might select the technology. According to the lifetime of the mission, the objective, and
the design, you identify the TRL of the components on board. Or you are obliged by your customers to
use a specific technology, because they want to sell/promote that technology. Mission to Mars: the
Canadian side wants to use their own radar technology. No way to change it. Even if it is a technology
for Earth, not necessarily suitable for Mars. They pay. Ask yourself what you can do.
What matters is the output.
This is the very first level of design, the design phase: no detailed design yet, up to implementing
and building the spacecraft.
You arrive at a baseline.
I have visited my search space of what I can do. Which kinds of solutions may I consider? I compared
them, I traded them off according to the criteria that I selected. Then I come to my client saying:
"This, for me, is the best thing to do. Now let's start the detailed design".

Output:
Design of the mission. Spacecraft design and configuration are considered together.
Selection of the launcher. This means schedule and costs. You cannot launch whenever you want:
there is the availability of launchers. Apart from the launch window dictated by the dynamics, you
also need the launcher to be available. It is not so obvious that this is compliant with the launch
window obtained from mission analysis.
Identify which risks you have, and how reliable you are. There is a way to manage risk: you
shall show you have at least a low risk that you can manage through testing and verification, or
through other missions before yours that checked the technology. For example, before landing on
Mars there were a lot of missions testing the capability to enter, descend and land on another
surface before sending a real rover to drive around… You may build a program, i.e. you design and
implement one mission after the other, each of them dedicated to a specific technology, to get to
the final one. This is a way to de-risk a mission as well, not just testing and using a very high TRL for
the technology.
Then you have to identify the overall cost. Cost means operations, hardware, personnel. For
example, if you want to de-risk a mission, you do a lot of tests on the ground. You may want to build
many replicas of the satellite to do structural tests, thermal tests, guidance and navigation tests
(because you have proximity manoeuvres…): for each of them you build a dummy satellite. I need
personnel to stay in the lab doing tests. I have to pay them, and I have to pay for the facilities, which
might not be mine. But in this way I de-risk the mission. I have to trade off: do I choose to take a risk
on some technology and just launch without tests on the ground? Or do I spend more time de-risking
the mission, but spend more on the ground? The decision is made by the manager in the end, but it
leans on your evaluation. You must be sure about what you propose; motivate and justify what you
propose.
Then of course simulation. It means you output models which allow you to build a complete
emulator of the satellite. If you have your satellite in flight and you want to check the data you
receive, or prepare commands, or cope with any failure you may have on board, you can run a
twin, an image of whatever is in flight, through a software emulator (maybe with some hardware
in the loop if you have it) and mimic what is going to happen before commanding or intervening.

Tables in the documentation online.

Tables in the upper part of the slide.


TECHNOLOGIES & PROJECT DEVELOPMENT
On the left: the ESA/NASA scale for TRL. On the right: the ISO scale for technology.
The relevant point is: the former scale, the one on the left, has been split in two between 5 and 6.
This is the important threshold to say "ok, I can start thinking of flying that spacecraft, technology
or component", or "I still have to do something". The idea was to relax the threshold, which used
to be 6, down to 5. So, if you are at TRL 5 or 6 it's ok to proceed. The difference: I must
implement my object, my component.
Example: the net designed to capture Envisat was supposed to be 60 meters per edge. Of course, a
60-meter net with a 100-meter tether to catch Envisat is impossible to implement and test on the
ground, and certainly also in orbit or in a parabolic flight (which is a way to test in a relevant,
space-like, significant environment). They could never reach TRL 6. Now, with the new regulation,
it becomes acceptable to do the same with a scaled breadboard: a parabolic flight with a net scaled
down to 1 meter, with dynamic similarity of course, credible and valid from the dynamic point of
view, i.e. the elasticity, damping, impact and all the things multi-body flexible elements ask for. It
was certified to be ready for flight.
This is important for making decisions: I'll put a test on something scaled in the development plan
for the technology, and it is accepted as being ok for starting the implementation for flight.
Bottom chart:

Phases?

This is the way a space project is standardized, valid worldwide.


In the brackets there are milestones: points where you have to deliver specific results of your design
process, to show you have finished that phase and you can jump to the next one.
On the right: what they mean.
We are going to do reverse engineering: start from phase B already done and run from phase B back
to phase 0 to understand what has been done.
Phase A → PRR: Preliminary Requirements Review. It means you have a clear idea of what you need
to design. You did your analysis, selected a baseline, you have a backup and an idea of what your
system shall do.
Phase B1 → SRR: System Requirements Review. It means that what you had preliminarily in mind in
phase A is now fixed, written in stone, nobody touches it, and you start designing in detail
with phase C afterwards.

It depends on the mission, but in general each phase lasts no more than one year (phases A, B, C, D).
In red: you end the design, that is phase B, necessarily with TRL 5/6.
If you don't get there, the agency/client asks you to jump back and keep looping until TRL 5/6 is
reached. It could be just one component, not the whole spacecraft.

Read the documents. You must know where the information is.


Now let's enter the details of the bubbles seen before, to be conscious of which are the inputs, the
interfaces you're expected to deal with, and the outputs. This is useful when designing. They are
reported in sequence, but there is no hierarchy. It isn't mission analysis that leads the mission.
Usually it starts first, just to understand some numbers (for example the distance from the Sun, the
distance from the Earth) that are relevant for the telecom, for the power, for the thermal… it's
needed to start, but it stays in a loop with the rest. It isn't something you start and close before the
others work.
Generally speaking, there is no hierarchy in the schema. It might be the case that, analyzing a mission,
you realize that everything could be done in parallel, but there is one (or more) aspect of the
mission that is the most critical. You shall start thinking, discussing, brainstorming around that,
because otherwise you wouldn't be able to solve it afterwards.
For example: a mission to Mercury or Venus. Any evident first issue to solve, critical in
designing that vehicle with respect to one around the Earth?

"Vicinity to the Sun"


Any aspect may be an opportunity or a drawback. Example: perturbations. Annoying stuff, but then
you discover they are an opportunity to control the spacecraft for free.
So, vicinity to the Sun could be an opportunity or a nightmare. If it is an opportunity, it is not the
first point in your list: you can leave it at the bottom and constrain it at most, because there are no
difficulties. Instead, if it is a drama, you start with that: radiation and temperature. For the EPS it
could be an advantage, but keep in mind that solar cells are affected by temperature: the higher the
temperature, the lower the efficiency.
I know I have to go to Venus. First, I'll brainstorm about thermal protection and understanding the
thermal environment. Second line: on-board data handling, for radiation, so electronics. I'll
start solving those, and the constraints about those will lead the rest.
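The temperature effect on the cells can be sketched with a linear derating model; the reference efficiency, reference temperature and coefficient below are typical-order assumptions, not datasheet values:

```python
def cell_efficiency(temp_c, eta_ref=0.30, t_ref_c=28.0, coeff_per_c=0.0005):
    """Linear derating: eta(T) = eta_ref - coeff * (T - T_ref).
    All parameter defaults are illustrative assumptions."""
    return eta_ref - coeff_per_c * (temp_c - t_ref_c)

hot = cell_efficiency(60.0)    # panel running hot (e.g. closer to the Sun)
cold = cell_efficiency(-50.0)  # cold panel, far from the Sun
print(round(hot, 3), round(cold, 3))  # → 0.284 0.339
```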
The scheme is the same, but with some highlights: if I don't start from those, I can't solve the
problem at the end of the story. You do it as a team.
The other way around: going to Pluto. The hierarchy is the other way around in terms of thermal.
Also, TMTC to work with. Propulsion as well: it might be the case that you don't have catalytic beds
that stay alive and active for such a long time; also valves maybe don't work anymore. Electric
power, for the energy from the Sun. These are the bubbles from which you want to start. The
decision about the hierarchy is made by you, together.

Process flow: building blocks I/O in early phases System Design

• MISSION ANALYSIS

For sure the MA gives inputs and constraints, but it is also an opportunity for many other
subsystems. Depending on the trajectory you have the distance from the Earth and the visibility
windows for the ground stations you want to talk with; you have the distance from the Sun for
radiation and thermal power. And you can shape it a bit: if you have to go to Saturn, you have to go
to Saturn, but you can shape the transfer with a low, continuous thrust while you are in the inner
part of the solar system, near the Sun, where you have power for electric propulsion, and
afterwards just stay ballistic. These are degrees of freedom in the MA that will affect other
subsystems, and that you can discuss together.
This is about the trajectory, but there are many other phases: launch, landing, relative dynamics, so
plenty of involvements.
Expected as outputs from MA:
- Of course, fixing the orbit.
- One of the main budgets: the ∆v budget. You give the propulsion expert a table with all the ∆v-s
you need, in terms of quantity and margin. Keep in mind that during the whole design life cycle
(from the initial concept up to when you put the vehicle on the launcher) you shall put margins
on your numbers. Margins are standardized: there is a document that specifies the percentage
of margin to apply, depending on the phase.
Example: orbit selection. You might do a simple calculation with Keplerian dynamics. This gives the
first loop of ∆v. It makes sense, so the propulsion engineer can start understanding which kind of
solution to adopt and the size of the tanks, checking whether those tanks fit in the launcher you
selected because of configuration, and so on. Of course, this won't be the real situation: you have
to consider internal and external perturbations. At that point you give the ∆v plus a 10% margin,
depending on the kind of manoeuvre you are dealing with. Change of plane? Sphere-of-influence
entrance? In-plane correction manoeuvre? In the latter case you may give only 5%. It is not your
choice: it is standardized. When you jump to a more detailed model, maybe using real
ephemerides, margins can be lowered to 5% instead of 10%.
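The margining logic just described can be sketched as a small calculation. The manoeuvre names, ∆v values and the exact percentages per manoeuvre class are illustrative assumptions, not the actual standardized figures:

```python
# Illustrative dv budget with per-manoeuvre margins, in the spirit of the
# lecture (5% for fine corrections, 10% for the rest -- example values only).
MARGIN = {"plane_change": 0.10, "soi_insertion": 0.10, "correction": 0.05}

def margined_dv(manoeuvres):
    """manoeuvres: list of (name, kind, dv [m/s]) -> total dv with margins."""
    total = 0.0
    for name, kind, dv in manoeuvres:
        total += dv * (1.0 + MARGIN[kind])
    return total

plan = [("insertion", "soi_insertion", 1200.0),   # hypothetical values
        ("plane change", "plane_change", 300.0),
        ("tcm-1", "correction", 20.0)]
print(round(margined_dv(plan), 1))  # 1320 + 330 + 21 = 1671.0
```

This is the table handed to the propulsion colleague: ∆v per manoeuvre, plus the margin class each one belongs to.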
Margins are fundamental for any kind of component and system you select! Another example: you
selected an existing monopropellant hydrazine thruster, sold by Airbus, because its thrust,
temperature and total impulse are compatible with the manoeuvres you need to do. But in your
mass budget you put a margin on that too. Impulse, thrust, kind of monopropellant: these translate
into a mass and power demand, and on those values I put a 10% margin, even though they are
values on the datasheet of that thruster. They tell me the object is this one, they weighed it... so
why do I put margins? The point is to keep a bit of flexibility: you have to stay in that category, but
not be locked on that exact unit. On the other side, at the very end, when you buy all the items, you
will weigh them, and there will be uncertainties on the components you actually bought (not the
ones on the datasheet). You must have sized your system so that the envelope allows for this
uncertainty on the components you are putting on board.
This margin on the mass will be lowered down to zero by launch. For the power and data budgets,
instead, those margins never die, not even at launch: in the EPS, a margin on the power the system
can provide is kept because the environment is not perfectly known, so you keep some reliability
and robustness in the system. So for some quantities the standardized margin disappears before
launch, for others it does not. Data is the same: you need a buffer of capability well above the data
you expect to manage, for safety.
Important! You cannot erase margins just because you cannot satisfy constraints. For example, you
made your system design and your mass is above the launchable mass, but you know you have a
20% (standardized) margin on the overall system mass. The temptation is: "I put 20% more mass for
safety, but this mass doesn't exist, so why not remove this margin and fit my design in the launcher
I selected?" The answer is NO! Because my spacecraft will become fatter and fatter while I build it. I
have to protect the margin: the design shall not touch the margin at all, at least up to phase C. (Kill
your mates over this trend.)
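The point about checking the launcher against the margined mass, never the nominal one, can be sketched numerically. The subsystem masses, the component-level margin and the launcher capacity are hypothetical; only the 20% system-level figure comes from the lecture:

```python
# Phase-A style mass budget sketch: component margins plus the 20% system
# margin that shall NOT be eaten to fit the launcher (numbers illustrative).
subsystem_mass = {"structure": 120.0, "propulsion": 80.0,
                  "eps": 60.0, "payload": 40.0}   # kg, best estimates
COMPONENT_MARGIN = 0.10   # example margin per selected item
SYSTEM_MARGIN = 0.20      # standardized system margin, protected up to phase C

nominal = sum(m * (1 + COMPONENT_MARGIN) for m in subsystem_mass.values())
launch_mass = nominal * (1 + SYSTEM_MARGIN)
print(round(launch_mass, 1))  # 300 * 1.1 * 1.2 = 396.0 kg

launcher_capacity = 400.0  # kg, hypothetical
# The feasibility check is done against the MARGINED mass, never the nominal.
print("fits" if launch_mass <= launcher_capacity else "does not fit")
```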

Another important step from MA:


- To give all the eclipses. Eclipses are the classical night-time, but also other kinds of occultations:
the object or moon you want to look at may be covered by the main body. Then there are non-
availability periods that are not eclipses: for example, if you are at Mars and want to transmit to
Earth, you have to avoid the solar conjunction geometry Mars – Sun – Earth. In that condition,
any kind of transmission will be highly disturbed by the noise temperature of the Sun. It is the MA
that gives this information to TMTC, saying "from this month to this month you cannot transmit
because the Sun is in the lobe", and then on up to OBDH, which must have memory for keeping
those data on board, or decide not to acquire scientific data: it has to be managed. It is the MA
that flags this. You may even change the launch date to avoid such blinding situations.

• PROPULSION
Tasks of propulsion: first of all, to get information from the MA about the ∆v-s, or the other way
around. Remember: bidirectional arrows. It may be the case that you say "this is the size of my
vehicle; the maximum tank I can put in the main cylinder is that one, so the maximum fuel you can
have is this. Now it's up to the MA: you can control only with that mass of fuel, do a miracle".
Generally this is not the case, but keep this flexibility in mind.
Propulsion also gets input from ADCS: you can use thrusters for desaturation or control. Propulsion
has both primary and secondary propulsion to be designed and motivated, and the architecture to
be defined, not only the kind: which kind of propulsion, how many thrusters, how many tanks,
which kind of pressurization you select for the tanks, how you distribute those tanks in the
architecture, the lines with pressure regulators, relief valves, sensors...
It is strongly affected by, and related to, the ENVIRONMENT (temperature, lifetime of the systems,
electric availability for electric thrusters, which require a lot of electric power) and the
CONFIGURATION. Yesterday we saw where you put the nozzle of the primary propulsion. Now look
at where the thrusters for attitude control are located: with quite a large lever arm, so that you
have the authority you want, and you can also reduce their number by inclining them at 45°, so
that one thruster can act on more than one axis, for example both yaw and pitch.

What you’re asked to output:


- Budgets, of course. Almost all subsystems have budgets to output: in this case the fuel budget,
plus the mass budget and the power budget if any power is required. With margins, again.
A note on the environment. We are bombarded with green propellants... but regarding TRL,
nowadays nothing is ready to fly on a classical spacecraft with green propellant. There are lots of
activities, so if the mission is going to be launched far away in time it could be considered, through
a test and development plan.
Another note on the fuel you select, if you have to land either on an airless body or on a body with
an atmosphere. Any remark?

Aerobraking and aerocapture can be considered.


You can do aerobraking with no change in the configuration, just using your spacecraft as it is: you
stay in the very high layers of the atmosphere, so the thermal interaction is not so heavy.
Aerocapture, on the contrary, jumps you from an open orbit to a closed orbit, from a hyperbola to
an ellipse, so the interaction is quite heavy and you need a shield: you are changing the
configuration. In this case you trade off the mass of the shield against the fuel mass you would
need to close the orbit propulsively at the pericentre of the hyperbola.
Aerobraking, instead, is done when you are already captured, just to lower the energy, so it is not a
trade-off on mass but on time, which actually comes back to mass as well. Because you are
elongating your mission (look at Venera 1 (maybe) and the TGO aerobraking, and how long they
took to do it: you stay in orbit for months to reduce your orbit).
A waterfall of savings on fuel mass? You might save fuel mass, but this saving asks for mass
elsewhere. The correlation is aging. You stay 6 months or 1 year longer in the space environment,
and your solar cells degrade by some 3-4% in efficiency. This means you have to enlarge the solar
panels a bit to have the same power: you are adding mass by elongating the mission, while saving
propellant through aerobraking. Typically, to do aerobraking you use your "wings" for tuning, a sort
of aerodynamic control. The interaction with the atmosphere may (it is up to you to control it)
affect the cover glass of the solar cells and ask for a thicker cover. So it seems you are saving
something, but in the end you may not be. It is up to you to make rapid calculations, not a very
refined design: computations in a few hours, just to say whether it makes sense.
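A rapid calculation of this kind can be done in a few lines. The required power, nominal lifetime and the exact degradation rate are assumptions for illustration; only the 3-4 %/year order of magnitude comes from the lecture:

```python
# Quick feasibility check: elongating the mission by aerobraking saves fuel,
# but solar cells degrade (~3.5 %/year assumed here), so the array must grow
# to deliver the same end-of-life power.
degradation_per_year = 0.035   # assumed cell efficiency loss rate
nominal_life_years = 5.0       # hypothetical mission duration
extra_years = 1.0              # extra time spent aerobraking

def eol_factor(years):
    """Fraction of beginning-of-life power left after `years` of exposure."""
    return (1 - degradation_per_year) ** years

# Ratio of array areas needed with vs without the extra year of exposure
area_growth = eol_factor(nominal_life_years) / eol_factor(nominal_life_years
                                                          + extra_years)
print(round(area_growth, 3))  # ~1.036 -> about 3.6 % more array area
```

A few percent more array area means more mass, which eats into the fuel saving: exactly the waterfall the lecture warns about.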
What about just before touchdown? Any aspect to keep in mind, because of the propulsion, when
using retro-rockets to land softly?

You arrive with jets: the plume impinges on the surface, and you typically raise dust.
You must understand whether these effects (depending on the thrusters and on gravity) are going
to affect any mechanism you may have near the surface: the mechanism to deploy the legs, the
thrust vector control mechanism if you have a gimballed nozzle...
The other issue is science. You do not land to go dancing, but to do science in that location. If you
do not have mobility within about 1-3 metres, you will do in-situ science: you will collect samples
from the surface, or from beneath it with a drill or a scoop, and analyse the soil. The risk is that you
pollute the soil with your exhaust gases, so that what you are detecting from the surface is your
own exhaust, or some chemical build-up caused by it. This must be avoided!
You have to understand the situation. Is there no possibility of chemical reaction? If the gases are
inert, I will find some hydrogen or nitrogen, I know perfectly well it is mine, it is a noise I can
remove nicely. Or I have to understand the decay time: I know I am polluting, I will wait and then do
my sampling. Or I build the robotics so that I reach beyond the zone I expect to be affected by the
divergence of the plume on the soil. This is quite important: otherwise you jeopardize the mission
completely. This is the reason why you sometimes use airbags or a sky crane: other solutions with
no effect on the soil.

• TELECOM TT&C

Telecom means three things: Tracking, Telemetry and Command.

Tracking (less true for Earth satellites, true for interplanetary ones) means building the state vector
of the centre of mass thanks to the radio connections, the RF links you have with ground: with
Doppler, with range and range rate, with delta-DOR, which is a difference in the time of getting the
signal back... There is a functionality that is: I use RF to build up and update my position in space.
Then I have to receive my telemetry. Telemetry is all the housekeeping data, so pressures,
temperatures, voltages, currents... from all the sensors I have on the platform. Typically low rate.
Then I have the scientific data, meaning the payloads: imaging, whatever data my spectrometers
are giving... typically on another channel.
Then I need an uplink, for failures or for the nominal updating of what to do: to upload a bunch of
new planning for the next week or month, short/medium/long-term manoeuvres and so on.

Inputs for that:


- For sure, what I have on board;
- Where I am: distance, blinding from the Sun, periods in which I cannot communicate;
- The power I have available for transmission;
- I have to talk with attitude, because if I have a directional antenna I have to point it to the
station: it is not just a matter of visibility. If I see the Earth but the antenna points in another
direction... Rosetta added a degree of freedom to talk with Earth freely: it had a gimballed
antenna.
- OBDH: if I cannot communicate, I need a huge memory on board; or, if I do not want a big buffer
on board, I make the decision not to acquire data. So I size the memory based on when I see the
ground and can download the data.
These three have to talk. Who is leading? It depends on the mission.

Output?
- Architecture. A design from the antenna to the amplifier to the modulator, as a scheme, without
selecting every single component. You typically do not have just one antenna: you have phases in
your mission. When you are tumbling at release from the launcher you cannot rely on a
directional antenna; you need an omnidirectional antenna so that, whatever your attitude, you
can send a signal. On the contrary, when you are doing science and have a lot of data, you must
direct your beam, because you have to save power while sending a lot of data. So, as for
propulsion, you have different architectures on board, depending on the functionalities and on
the phase. Identify the phases, the best solution for each phase, the sizing variables.
- LINK budget. It is the evaluation of the signal-to-noise ratio with respect to the architecture you
selected. You selected the antenna on board, the antenna on ground, the electrical power you
can use for transmitting; you know the environment and hence the losses, noise included; so you
can tell the level of power your receiving element will get with respect to the noise that is
produced.
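A minimal link-budget sketch in dB, using the standard free-space path loss formula; all the numeric values (transmit power, gains, lumped losses, distance, frequency) are illustrative assumptions, not a real link:

```python
import math

# Minimal link-budget sketch: received power = EIRP + Rx gain - path loss
# - lumped losses (all numbers illustrative).
def fspl_db(distance_m, freq_hz):
    """Free-space path loss: 20*log10(4*pi*d/lambda)."""
    lam = 3e8 / freq_hz
    return 20 * math.log10(4 * math.pi * distance_m / lam)

p_tx_dbw  = 10 * math.log10(20.0)  # 20 W transmitter
g_tx_db   = 30.0                   # onboard high-gain antenna
g_rx_db   = 60.0                   # ground station antenna
losses_db = 3.0                    # pointing, lines, atmosphere (lumped)

d = 2.0e11                         # ~1.3 AU, a Mars-like distance [m]
f = 8.4e9                          # X band [Hz]
p_rx_dbw = p_tx_dbw + g_tx_db + g_rx_db - fspl_db(d, f) - losses_db
print(round(p_rx_dbw, 1))          # of the order of -177 dBW
```

Comparing this received power with the receiver noise floor gives the signal-to-noise margin the lecture refers to.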

• THERMAL CONTROL TCS


The only mechanisms we have in space are conduction and radiation, no convection, apart from
very few situations like bodies with an atmosphere and inhabited modules such as the ISS.
An engineer shall always be stingy (tirchio): do not spend resources without reason. Thermal
control is the classical example. You can warm up a system with heaters and cool it down with
Peltier devices, but these ask for electric power, and classically a lot of it.
TC is the classical test of whether you are a clever engineer or not. Exploit at most the
characteristics of the materials with respect to the electromagnetic bands in which they emit,
absorb and reflect, so that your system does what you want passively, asking for no power. Play
with paints, with any kind of covering material for the external surfaces... this is called thermal
analysis and thermal control definition. Select the materials at best, so that the hottest spot
radiates at most, the coldest keeps in whatever is produced as waste energy, and the whole is
balanced.

- Thermal budget to output of course


Inputs: all the components you have on board ask for an admissible temperature interval, cold and
hot maxima, both when operational and when in standby, and you must design the system so that
each component stays within it. The thermal budget shows this: that, building up the system with
white paints, louvers and thermal blankets, located in the system as needed, you keep all the
components represented in the model (all the nodes) within the required interval.
The TCS is another example of the mentioned margins. When you design the system, add at least
15 K of margin: if an object has a maximum admitted temperature, you must decrease it by 15 K,
being stricter than the hottest admitted temperature. You shall do the same with the coldest one:
add 15 K, so that you are well margined. This is really a hard job. For example, if you have a battery
that must stay between 0 and 40 °C, in the end you have only 10 degrees to play with. It is a
nightmare. The same goes for cameras and electronics, which again stay within -20 to +40 °C.
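The battery example above, written as a two-line check (the 15 K figure is from the lecture; the qualification limits are the ones quoted for the battery):

```python
# Applying the 15 K design margin to a component qualified between 0 and
# 40 degC: the window the TCS can actually design to shrinks to 10 degC.
MARGIN_K = 15.0

def design_window(t_min, t_max):
    """Return the margined (designable) temperature interval."""
    return t_min + MARGIN_K, t_max - MARGIN_K

lo, hi = design_window(0.0, 40.0)
print(lo, hi)  # 15.0 25.0 -> only 10 degrees to play with
```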
Interfaces: who you have to talk with.
- MA.
- Attitude, because it matters: if you have a very nice surface that emits at most, but your attitude
colleague puts this surface in front of the Sun, and unfortunately its absorptivity in the high-
frequency bands of the electromagnetic spectrum is high, you got nothing. You must discuss and
arrive at a compromise with pointing as well.
- Configuration, obviously. You might impose not a cube but a prismatic shape for the external
envelope, because you need one more surface to radiate, or to attach a deployable radiator.
Simulation comes easily, but which parameters? Being flexible, feasible and compliant as a whole is
up to you. Thermal asks you to interact with the others quite a lot.
Another critical point: to understand the heat generated by the components you have on board.
Thermal energy is the last level of energy conversion: the whole electrical power you generate from
the Sun will at some point become heat. It is up to you to understand which part of it stays inside
the spacecraft, in charge of the TCS, and which part leaves the spacecraft. For example, the energy
you give to the catalytic bed of a monopropellant thruster, or to the valves of the propulsion
subsystem, somehow leaves with the thrust, including its heat fraction: that part of the electric
power given to propulsion includes wasted energy, but it leaves the system and is not in charge of
the TCS.
Electronics is different: if you give electric power to a computer, all of it will be transformed into
heat. So if a computer takes 5 watts, these 5 watts have to be accounted for in the thermal balance.
It is in your hands to understand the real heat the spacecraft generates inside, which you have to
manage together with the Sun input, the planetary inputs and all the rest from the environment.
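The balance just described — absorbed Sun input plus internally generated heat, radiated by the external surfaces — can be sketched with a single-node steady-state model. All the coefficients and areas below are illustrative assumptions:

```python
# Single-node steady-state thermal balance (illustrative numbers):
# eps * sigma * A_rad * T^4 = alpha * S * A_sun + Q_int
SIGMA = 5.670e-8    # Stefan-Boltzmann constant [W m-2 K-4]
S     = 1361.0      # solar constant at 1 AU [W m-2]
alpha, eps = 0.3, 0.8          # absorptivity / emissivity of the coating
A_sun, A_rad = 1.0, 6.0        # sun-facing vs total radiating area [m2]
Q_int = 100.0                  # W dissipated inside (e.g. the electronics)

T_eq = ((alpha * S * A_sun + Q_int) / (eps * SIGMA * A_rad)) ** 0.25
print(round(T_eq, 1))          # equilibrium temperature in Kelvin, ~208 K
```

Playing with alpha and eps (i.e. with the paints and coverings) is exactly the passive knob the lecture says a stingy engineer should turn first.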

• ATTITUDE DETERMINATION & CONTROL ADCS

Let me underline one point: DETERMINATION. It is valid also for the centre of mass, for any kind of
state vector. It is not only a matter of controlling the state vector (attitude or centre of mass): first
you have to determine that vector. You classically assume you know where you are and design the
guidance, i.e. where to go. But this is not the case!! Part of the design is to select the correct
sensors to determine, with the wanted accuracy, the attitude (and the centre-of-mass position),
and then to select the control tools accordingly. In attitude (but also for the centre of mass) you
need one order of magnitude of difference: given an accuracy in attitude determination, you get a
control capability one order of magnitude coarser. If you have centimetres in determination, you
will have decimetres in control. In between you have errors and noise in the sensors, errors in the
algorithms, errors in the data fusion chain, and the same for the control synthesis and the
actuators. Keep this in mind when you select the architecture!
Output: you are asked to design, select and motivate which kind of sensors and actuators you want
on board.
Depending on the phases (as for propulsion and TMTC), you are not asked to have one architecture
for all. It could be that during launch and commissioning (the very first phases, when you do not ask
for precise pointing) you might have just coarse sun sensors or Earth sensors (if in sunlight), while
for the science phase you need star trackers, or you just lean on magnetometers. Different sensors
depending on the functionalities and phases of the mission. Do not be shy about putting different
architectures on board; motivate them. The same goes for the architecture of the actuators.
Output: the pointing budget. It is what you want to do. You do not start the design by saying "I
need 12 sun sensors, 16 thrusters and 4 wheels, so I'm robust, that's it". No: you are not justifying
it. Maybe you do not need to control each axis; maybe (rarely) you are not in sunshine for long, so
such a large number of sun sensors makes no sense. First of all, identify the problem: in the
attitude subsystem this is the pointing budget. You have a table, and you ask everyone which kind
of accuracy they want in knowing the attitude and in controlling it.
To the thermal colleague: do you need to point away from the Sun?
To the TMTC colleague: what is the beamwidth of your high-gain antenna? What do you want in
terms of knowledge and control?
You ask everybody in terms of knowing where I am pointing and controlling where I am pointing:
two different aspects. Maybe you want to know where you are pointing with no need for control:
you just want to look at the sky and know where you are looking. You might decouple the two!

Budget in accuracy, stability… also for determination, not only control.


Fill the table, do the analysis, understand when those accuracies are required. Then you build your
own pointing budget, which is a performance, not a requirement. It is a mirror that says "ok,
perfect, the accuracy for the thermal colleague I can provide, and this is my output; I cannot do it
for TMTC, so the numbers will be different". The output is the performance of the system, possibly
not compliant with the requests, and I have to highlight where I am out of compliance. Check with
TMTC whether it is ok if you are not at 1° accuracy in determination and 5° in control, but at 5°
accuracy in determination and 10° in control. Is that ok for the antenna? Do you have to change the
antenna and enlarge the lobe? Yes or no, and close the point.
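The antenna compliance check can be sketched numerically. The beamwidth is a hypothetical figure, and summing knowledge and control errors is a deliberately conservative stack-up assumption; the 5°/10° values are the ones quoted in the discussion above:

```python
# Sketch of the ADCS <-> TMTC compliance check: does the achievable pointing
# performance keep the ground station inside the antenna lobe?
half_beamwidth_deg = 8.0   # high-gain antenna half-beamwidth (hypothetical)
determination_deg  = 5.0   # achievable knowledge accuracy
control_deg        = 10.0  # achievable control accuracy (coarser, as expected)

# Conservative worst-case stack-up: knowledge error plus control error
pointing_error_deg = determination_deg + control_deg
compliant = pointing_error_deg <= half_beamwidth_deg
print(compliant)  # False -> change the antenna (enlarge the lobe) or the ADCS
```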

• ON-BOARD DATA HANDLING OBDH

It is, like the EPS or the TCS, a service, a resource on board. So it needs input from everybody: it
needs to know how many data each subsystem has to manage. "Data": think about propulsion or
attitude...
Propulsion is a good example. You have tanks, and you want to know their pressure and
temperature; you have lines, and you want to know the pressure along the lines; you have the
chamber, or the thruster itself, and you want to know its pressure and temperature. These are
sensors, and these are data: data that you, who designed the spacecraft now in flight, want to
know, to understand whether it is working nicely or you have to intervene. During the design you
have to think about what you want to know about the system by reading the data coming through
telemetry, and according to that you will put sensors in the design.
How many pressure sensors, thermocouples, ...? How frequently do you expect these data are
needed to reconstruct the models and the evolution of the temperature, without falling into
aliasing (Nyquist) problems? So, how frequent will my sampling be?
And which is the precision? This is important for OBDH: it becomes the digitalization and
quantisation of the data, the number of bits, the length of the word, the amount of memory I need
to put on board so that all the samples you want can be preserved and downloaded to ground, so
you can look: "my tank is ok", or "my tank is starting to sublimate, I have a problem". Is that a fast
dynamic, so I have to sample frequently? Is that precision needed? It will affect operations and the
OBDH colleague, who has to preserve that amount of data on board and split it over the
communication windows. Kilo-, mega-, terabytes shall be downloaded, maybe in pieces, not
necessarily all at once: it depends on the urgency of building the status of the spacecraft.
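The sizing reasoning above (channels, word length, sampling rate, time between contacts, downlink rate) can be run as a back-of-the-envelope calculation; all the numbers are illustrative assumptions:

```python
# Back-of-the-envelope housekeeping data-volume sizing (numbers illustrative).
n_sensors   = 40      # pressure, temperature, voltage channels
word_bits   = 16      # quantisation per sample
rate_hz     = 0.1     # housekeeping sampling: one sample every 10 s
window_days = 3       # time between two communication windows

seconds = window_days * 24 * 3600
volume_bits = n_sensors * word_bits * rate_hz * seconds
print(volume_bits / 8e6, "MB to buffer on board")   # ~2.07 MB

downlink_bps = 2000.0  # available telemetry rate (hypothetical)
window_s = volume_bits / downlink_bps
print(round(window_s / 60, 1), "minutes of contact to empty the buffer")
```

If the required contact time exceeds the available windows, either the memory grows, the packets are split over more passes, or some data are simply not acquired: exactly the trade the OBDH colleague has to close.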
So, if you are doing a manoeuvre, such as entering a SOI or a powered gravity assist, it makes sense
to ask for a more frequent sampling of the status of the subsystems, because they are active. Does
the manoeuvre last 30 minutes? Maybe 1 Hz sampling in that span of time. During ballistic flight, a
check every two or three days is ok.
OBDH: how many memories do you want on board, which kind of processor, which level of bits do I
have to deal with, how do I manage with the telecom so as to have more windows to
communicate? Splitting the packets differently?
OUTPUT: architecture. Do I want redundancy on my computers? Memories dedicated to the
payload, or to a specific relevant subsystem? Typically to ADCS: it has sampling and commanding
frequencies completely different from the other subsystems. Typically you have a second PC that is
an image of the on-board PC, so you have hot redundancy: whatever happens, you can use both of
them, but one of the two is dedicated to managing ADCS — acquiring the sensor data, outputting
to the actuators and synthesizing the navigation (the determination of your state), verifying it
against the guidance and sending the commands to the actuators. Attitude works much more than
primary propulsion (centre-of-mass control), apart from the case of electrical, continuous thrust. So
typically you dedicate units to it and to the payload.

• OPERATIONS

Operations stays with the telecom and with the design of the overall mission lifetime, from launch
up to end of life. Mark three aspects:
- Phases and modes. You define what the system shall do: which functionalities it answers, and
when. If those functionalities are repeated along the lifetime, this is a mode; if they happen just
once, this is a phase.
Launch is a phase.
Imagine your system as a long state vector in which you put values for each variable on board
(each sensor, each component, each valve...): if that state vector is acquired just once over the
whole mission, this is a phase.
Landing and SOI entry are phases.
Science could be a phase that includes different modes.
A mode, instead, is a peculiar state of the system that is repeated more than once. For example,
you have a slot of time when you communicate with ground: this is a mode. You put the attitude
control in a given state so that the antennas are aligned; you want the power at a given level of
resources given to the system; and this is repeated from time to time.
Settle all that; analyse what you need.
Attitude, too, has many modes, depending on the accuracy you need. Detumbling is a mode; safe
mode, in which you point to the Sun to get power or point to the Earth for communication
because something is wrong, is another mode for attitude. There I do not need precise accuracy, I
may be rough, but I must point to the Sun if I have solar panels.
Each mode or phase is a specific state vector for the whole system.

- In operations you define the FDIR (Failure Detection, Isolation and Recovery) logic: which logic do
I adopt if I have any anomaly?
An anomaly could be a definite failure, so I completely lose a component that does not work
anymore (an amplifier in the telecom, for example), or a temporary failure, for example a valve
that is not opening but, trying again and again, opens: just a temporary lock. You have to design
the "what if": how do you detect that something is a failure? Usually you can build it into the
software only after having the first level of design, because you need to know the architecture of
the system, what each component is doing and which functionalities it answers, in order to
imagine what the off-nominal conditions could be. Then you identify an alarm.
If I receive a telemetry saying the pressurant tank is 5° above nominal, for me that is the
detection of an anomaly. What do I do? Isolation and recovery: I isolate this expected failure, for
example by closing all the valves in the line of that tank, and then commanding an attitude so
that that tank points towards the Sun or away from the Sun (depending on whether the low or
the high threshold has been exceeded). This is the logic of detecting the anomaly, operating,
mitigating and recovering. You have to put it on the table before flying, after designing, checking
it many times for validation on ground, so that whenever it happens and you read a telemetry in
that scenario, you immediately know what to do. This could already be in the on-board software,
or you could be asked to intervene through telecommands during the next window.
So FDIR needs to talk with all the subsystems, platform and payloads, and of course with the
possibility to communicate, either to be autonomous or not.
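A toy version of such an FDIR table — telemetry thresholds triggering detection, mapped to a pre-validated isolation/recovery action. All channel names, limits and actions are hypothetical:

```python
# Toy FDIR table in the spirit of the example above: thresholds on telemetry
# channels trigger detection; the mapped action is the isolation/recovery
# logic validated on ground before flight (all values hypothetical).
FDIR_TABLE = {
    # channel: (low limit, high limit, recovery action)
    "tank_pressure_bar": (18.0, 24.0, "close tank line valves, enter safe mode"),
    "battery_temp_C":    (0.0,  40.0, "enable heaters, point radiator to deep space"),
}

def check(channel, value):
    """Return 'nominal' or the pre-defined recovery action for an anomaly."""
    lo, hi, action = FDIR_TABLE[channel]
    if lo <= value <= hi:
        return "nominal"
    return action  # anomaly detected -> isolate and recover

print(check("tank_pressure_bar", 21.0))   # nominal
print(check("tank_pressure_bar", 26.5))   # recovery action string
```

On a real spacecraft this table lives either in the on-board software (autonomy) or on ground, executed via telecommand during the next contact window.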

24/02/2022
The electrical power generation and storage subsystem is of fundamental importance: without it,
nothing else will work.
The power budget is established using a large table with all the subsystems and components that
need power, split into modes (the electrical subsystem is involved in all the different modes). We
will see the EPS in detail in the future.
An example of power budget could be a matrix as follows:
↓ Subsystems \ Phases or modes →    …    Thermal control    Thrust      Science
TMTC
ADCS                                      x [Watt]
PS                                                           y [Watt]
OBDH
Sum of subsystems' power

Then also the Total Required Power [Watt] is computed.


x and y are the values in watts of what I can provide, which is not what is requested (for example
because of margins).
Once we have the matrix, we decide our energy source (solar, chemical, nuclear, etc.), whether we
need a certain current or voltage, and we size the batteries (if needed).
I know thrust is demanding, so I can size a battery so as to use it even during sunlit spots, to
support the panels.

We also need to know HOW to connect the elements of the system. What happens with solar
panels? I must choose how to manage the phases with different power availability.
When I am far from the Sun I could tilt the solar panels to maximize the energy, or I can use a
resistance, or a battery, if needed. I have to determine how much power is needed, but also in
which phase of the mission (where am I when I need it? what do I have to do?).
Example:

Consider a satellite + payload. The power demand is:

- Power demand under sunlight: 500 W
- Power demand in eclipse: 300 W
But generating just 500 W during sunlight is not enough: I will need at least around 800 W, in
order to also charge the battery and so provide energy for the eclipse phase.
(If I use a nuclear generator I do not have this problem.)
Batteries are heavy: for this reason the power demand in eclipse is kept lower. Telecommunications
are often done in eclipse, because low temperatures are good for them.
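The energy balance behind the 500 W / 300 W example can be sketched as follows. The orbit times and the battery round-trip efficiency are assumed values; the result lands in the neighbourhood of the ~800 W quoted above once margins are added:

```python
# Energy balance for array sizing: during sunlight the array must feed the
# loads AND recharge the battery for the next eclipse (assumed values).
P_sun_load = 500.0          # W needed while in sunlight
P_ecl_load = 300.0          # W needed in eclipse
t_sun, t_ecl = 60.0, 35.0   # minutes per orbit (hypothetical LEO)
eta_batt = 0.8              # assumed battery round-trip efficiency

E_ecl = P_ecl_load * t_ecl / 60.0            # Wh drawn from the battery
P_charge = E_ecl / eta_batt / (t_sun / 60.0) # W the array must add in sunlight
P_array = P_sun_load + P_charge
print(round(P_array))  # ~719 W before margins -> of the order of 800 W
```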

Configuration: when I have all the subsystems selected, I put them together, reconciling their
requirements. Configuration means the shape of the subsystems, the mass budget and the
allocation of the components.
We have to talk with the trajectory designer, because we must know the pointing that positively
affects visibility from the ground stations, how you see the Sun, visibility of other planets, distance
from the Sun, etc.; moreover, we must know the maximum dimensions allowed by the launcher.
The mass budget, at this stage, is the mass of each subsystem on board, not necessarily of each
component. We shall have a 20% margin.

An engineer is asked to output some estimated numbers for the cost, the risk and the development
schedule (activity plan).
If I select the performance of the thrusters (which means where I will buy them), I also have to
account for price and risks.
In this phase we are expected to produce an estimation of the costs: spacecraft, operations and
launch. Which are the critical points? I must have in mind the risks associated with the technologies
(TRLs) and indicate a way to mitigate those risks.

AIV/AIT: plan our activities, from design to the end of life of the mission (disposal). I must decide
what I want to test, which models to use for the tests, what kind of documentation to produce, etc.
Lavagna wants us to have the ability to understand the functionalities just by looking at the image
of a spacecraft.
We will always have the problem of who will pay for our project, so we must be able to persuade
and convince them that our project is really valuable from the revenue point of view.
2 main customers: the European agencies (national agencies) and the European Commission.
A nation pays to enter ESA. ESA receives money from the nations, and it is assigned to the fields
decided by each nation. For this reason industry should talk to its government and convince it to
assign money to its field: in that way ESA will return money to develop the industry's ideas. The
European Commission, instead, is free to manage its money as it wants; ESA, on the other hand,
returns to each country the same amount it received (the so-called geographical return).
Ministerial conference: the ministers of the member countries discuss what ESA will propose in the
next period and on what each state will put its money.
Each country tries to anticipate ESA's future intentions, in order to develop that competence and
receive ESA money.
ESA sites are located in isolated places, to act as a flywheel for the local economy and for the
development of an international environment.

Centres:
- EAC (Cologne, Germany): Astronaut Training Centre (there are swimming pools, simulators for
low-gravity conditions, experiments);
- ESOC (Darmstadt, Germany): the MOC (Mission Operations Centre), to acquire and analyse
telemetry and prepare commands. Telecom/electronic/IT engineers work here;
- ESAC (40 km from Madrid, Spain): astrophysics missions (problems related to plasma and
cosmology);
- ESRIN (Frascati, near Rome, Italy): Scientific Earth Observation Centre;
- ECSAT (Harwell, UK): specialized in samples (sample return, analysis of rocks and minerals,
examination of outer-planet resources).

Ground stations are sited according to several criteria:

- Where there is nothing around and at high altitude, so there is less atmosphere in the communication path and less disturbance on the signals;
- Many lie in the equatorial/tropical belt, because that way there is always an antenna pointing at the ecliptic, which is useful for communicating with interplanetary probes (for instance NASA's DSN has antennas of 70 m diameter, while ESA's ESTRACK has 35 m antennas. Using the two systems has different costs, but both are used to communicate with interplanetary probes);
- Ground stations along a meridian are used to monitor LEO SSO (Low Earth Orbit, Sun-Synchronous Orbit) missions. An SSO is useful because the probe can be designed to work always in the same way (illumination conditions are always the same, so we can design a simpler thermal control, power supply, etc.).
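The SSO condition mentioned above can be sketched numerically: the Earth's oblateness (J2) makes the orbit plane precess, and choosing the right inclination makes the node drift exactly once per year, keeping the illumination geometry constant. Below is a minimal sketch of that computation; the altitude is an assumed, typical Earth-observation value.

```python
import math

# Sketch: required inclination for a Sun-synchronous orbit, from the J2
# nodal-regression formula  dOmega/dt = -1.5 * n * J2 * (Re/p)^2 * cos(i).
MU = 3.986004418e14      # Earth gravitational parameter [m^3/s^2]
RE = 6378137.0           # Earth equatorial radius [m]
J2 = 1.08263e-3          # Earth oblateness coefficient
OMEGA_SSO = 2 * math.pi / (365.2422 * 86400)  # one revolution per year [rad/s]

def sso_inclination_deg(altitude_m: float) -> float:
    """Inclination that makes the node precess once per year (circular orbit)."""
    a = RE + altitude_m
    n = math.sqrt(MU / a**3)                      # mean motion [rad/s]
    cos_i = -OMEGA_SSO / (1.5 * n * J2 * (RE / a)**2)
    return math.degrees(math.acos(cos_i))

# Typical Earth-observation altitude (assumed): ~700 km -> near-polar, ~98 deg
print(round(sso_inclination_deg(700e3), 1))  # -> 98.2
```

This is why SSO satellites are near-polar retrograde orbits: the cosine must be slightly negative for the node to drift eastward at the required rate.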

ESA relies on EUMETSAT for meteorology, Arianespace for launch services and Eutelsat/Inmarsat for telecommunications.
Each country contributes a budget proportional to its gross national product.
ESA then puts money into mandatory programmes and optional ones; each nation pushes to put money into specific optional fields.
Italy is a major contributor to ESA; we are specialised in navigation and Earth observation. Now that Galileo is complete, the budget allocated to navigation will decrease, while money for human spaceflight will increase because of the partnership with NASA's Artemis missions.
01_03
The payload organisation has a “Christmas tree” form.
International interplanetary mission: the instruments are paid for by the nations, while the vehicle is provided by ESA.
You put all the instruments on board because the nations pay for them, and the whole vehicle is under ESA. If the mission carries a drill, it is paid for by the Italian agency, not by ESA.
The data produced are first the property of the nation that owns the instrument; afterwards there is open access to them.

Being the first to announce results from another nation's instrument data is a privilege.
To collect as much money as possible, the ExoMars mission is a “glue” mission: nations are asked to bring all the instruments and payloads they want. The problem is that the spacecraft shall then work with all this extra mass, so the lander shall be designed accordingly. A design loop begins.

The lander collects samples from different areas of the planet.

The samples shall be collected 2 metres below the surface; this is a requirement. The drill shall go autonomously 2 m down.

Requirements:

1) Mandatory: 1.8 m deep;
2) Nice to have: 2 m deep.

The main problems are:

1) temperature;
2) the drill shall work on ice, which can vaporise;
3) the drill shall be 2 m long, but it cannot be a single 2 m beam, for many reasons (it cannot be transported easily, the mass would be too high);
4) vibration of the drill during operation, because of its length: the longer the drill, the lower its natural frequency;
5) surface contamination.

Way to solve it: reduce the free length.

The free length is selected according to the lander's frequency environment, so that the resulting vibration frequency is roughly twice the one imposed by the launcher.
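The "longer drill, lower natural frequency" point above follows from cantilever-beam theory: the first bending frequency scales as 1/L². Here is a minimal sketch with purely hypothetical rod properties (material, diameters) to show the trend quantitatively.

```python
import math

# Sketch with assumed, illustrative drill-rod properties: first bending
# frequency of a cantilever beam, showing why a longer free length lowers
# the natural frequency (f ~ 1/L^2).
E = 110e9        # Young's modulus [Pa] (e.g. a titanium alloy, assumed)
RHO = 4430.0     # density [kg/m^3] (assumed)
D_OUT, D_IN = 0.010, 0.006   # hollow rod outer/inner diameters [m] (assumed)

A = math.pi / 4 * (D_OUT**2 - D_IN**2)     # cross-section area [m^2]
I = math.pi / 64 * (D_OUT**4 - D_IN**4)    # area moment of inertia [m^4]

def first_bending_freq_hz(free_length_m: float) -> float:
    """First cantilever mode: f = (lambda1^2 / 2*pi) * sqrt(E*I / (rho*A*L^4))."""
    lam1 = 1.875104  # first-mode eigenvalue for a cantilever beam
    return (lam1**2 / (2 * math.pi)) * math.sqrt(E * I / (RHO * A * free_length_m**4))

for L in (2.0, 1.0, 0.5):   # full 2 m rod vs. shorter free lengths
    print(f"L = {L:.1f} m -> f1 = {first_bending_freq_hz(L):.1f} Hz")
```

Halving the free length quadruples the frequency, which is why the free length, not the total drill length, is the design variable to play with against the launcher-imposed frequency requirement.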
The drill collects the sample, then rotates.
A drawer collects the samples. A chain automatically delivers the samples to the other instruments, or stores them for conservation; the chain allows the samples to be routed to the instruments that shall analyse them.
For example, if the samples must not contain iron (requirement), I have to analyse them for iron, for example with an instrument that uses a magnetic field. But that instrument shall not interfere with the others, especially the electronic ones.

The tip of the drill is built in a carbide/diamond material (it reduces ageing and erosion). One has to check whether the drilled material can ruin the tip; if this cannot be excluded, more than one tip is carried and changed when needed.

Simulation determines when the cutter shall be changed.


A simulant of the Martian soil is built in order to test its perforation in many tests with the EM (engineering model) of the drill. With this technique the edge of the cutter can be verified as a function of the translational velocity, etc.
The purpose is to develop GSE (ground support equipment) for the mission, characterising:

1) translational velocity;
2) duration;
3) characteristics of the soil.

I drill and then I have to extract the drill from the ground. But what if it gets stuck? ("I have no idea how to get this drill out of the wall" — Prof. Lavagna).
Wherever something can go wrong, I have to solve the problem. The machine is designed to retract the drill with a given force in newtons; if that is not possible, because of technological limits, the drill is left there.

I want another drill: the multi-rod drill.

They built a device made of 4 rods, 50 cm each, plus one spare tip. So if one gets stuck inside, another replaces it.
The sequence is this:

The penetration of the soil is made by a sequence of rods. Once one completes the perforation over its own length, the following one replaces it and continues the perforation, and so on.
The thickness of the rods ("holger"? — term unclear in the recording, probably referring to the drill rods) and their inclination depend on the soil to be removed.
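The rod sequence described above is essentially a simple sequential protocol; as a minimal sketch (rod length, rod count and target depth taken from the lecture's figures), it can be expressed like this:

```python
# Sketch of the multi-rod drilling sequence: four rods of 0.5 m each are
# deployed one after the other; each advances the hole by its own length
# until the target depth is reached.
ROD_LENGTH_M = 0.5
N_RODS = 4
TARGET_DEPTH_M = 2.0   # the "nice to have" requirement

def drilling_sequence(target_m: float):
    """Return (rod index, cumulative depth) after each rod is driven in turn."""
    depth, log = 0.0, []
    for rod in range(1, N_RODS + 1):
        if depth >= target_m:
            break
        depth = min(depth + ROD_LENGTH_M, target_m)
        log.append((rod, depth))
    return log

for rod, depth in drilling_sequence(TARGET_DEPTH_M):
    print(f"rod {rod}: depth = {depth:.1f} m")
```

The point of the architecture is visible in the loop: the hole depth is the sum of short, individually replaceable elements, so no single rod needs the full 2 m free length.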
Testing on the simulated soil is very important. We have one drill only, made of 4 rods. Inside the rods there are many sensors (spectrometers) that study the soil and collect data (building a vertical history of the soil's minerals). You must ensure correct and clean electrical connections, supplies and data-transfer cables, because the environment can contaminate the instrument (wind, sand, water, etc.). The contact interface shall remain operative, i.e. clean and correctly operating. A solution can be fibre-optic cables.
Functionalities: I have to drill. The drill shall crack the soil into particles of a given size, so I need something measuring the size of the elements.
The drill shall be attached to a vehicle while drilling: interaction with the rover operations.
Interface: the drill dynamics react on the rover, even though the two have different masses.
The rover moves backward thanks to its wheels. The wheels are flexible, which helps follow the morphology of the Martian soil. The wheels require brakes, which must be tested in simulation.

Requirement: the drill's velocity is monitored. The whole system shall stay fixed while drilling.
Brakes or anchoring systems? The choice is based on criteria such as friction, penetration capacity, strength, limit thrust and drilling loads. An anchoring system is relevant especially where the escape velocity is very low.
While drilling, the drill axis shall stay at 90° to the surface within a maximum deviation of 5°. The relative velocity between the system and the soil must be zero.
The rover/lander can rely on:

1) Retrorockets: push the lander onto the soil. Problem: the gas valve for the engine failed to open. The spacecraft was launched in 2004 and landed 10 years later; the valve remained closed for a long time and then did not work. A solution can be commissioning activities during flight (e.g. opening and closing the valve) when this is not operationally risky.
2) Harpoons: used to hook the ground (like pirates with ships). A charge gives the impulse to enter the soil, and the harpoon stays trapped there. Problem: software, one line wrongly written, it never fired;
3) Leg screws: drilling at low velocity while entering the soil. Sensors help analyse the soil strength.

7 March 2022
SSEO_L3_Activities in Space_2022, from slide 45

Which missions? Public service


We mentioned interplanetary and science missions. Another category is public service, which is basically Earth observation. When you deal with remote sensing, keep an eye on this chart.
Focus on the spatial resolution (left-top to right-bottom on the chart), which tells you how many pixels you have: below the size of a pixel, you cannot distinguish anything. This will come back when we talk about the different payloads, because it depends on the metric of your detector, i.e. the width and height of the detector pixels (the same you have in your digital camera; pixels can also be rectangular). This determines the accuracy of the image you look at, while the detector size determines the maximum coverage you can have on the ground.
The minimum object you can resolve depends on your altitude, the focal length of the instrument and the size of the pixel, i.e. on the metric you want on the ground. But this is not the only resolution the scientists ask for. Typically, customers say they want to look at [something] with a precision of, say, 1 metre. Then you also have the radiometric resolution, which is basically the quantisation of the signal: depending on how many bits you use, you have a different precision and accuracy in the measurements you want to make. Is it 6, 8 or 10 bits? When you define that number, you are also defining the precision. It is not only the band you want to capture (visible, IR, microwave, high-energy) but also the resolution within that band, and this is given by the radiometric resolution. The spectral resolution instead answers: fine, I want to stay in the IR, but the IR is a rather large domain, from nanometres to micrometres. Which part of the spectrum do I want in my signal, in my instrument? Working in the near IR, the SWIR or the thermal IR means doing something different: you detect specific materials that absorb those frequencies. These are the kinds of information you get from the scientists and then must manage in the mission. The temporal resolution is how frequently I want to revisit not just a ground track but a strip of coverage with the FOV of my instrument. The point is: when you talk about resolution, ask yourself which one it is. Do I have data on all of them, and which part of the design does each one affect? The graph on the right has the spatial resolution on the horizontal axis, i.e. the level of detail you want to look at: for example, if you are not following ocean waves, you do not need very high spatial resolution.
This means you can either have fewer pixels in your matrix, simplifying the amount of data to download, or you can stay higher in orbit. Remember that the radiometric resolution may become a bottleneck, or a driver, for the design of your on-board avionics memory: it sets the number of terabytes you have to store on board. It also drives the kind of telecom subsystem you select, because with a very high radiometric and spectral resolution the data rate, i.e. the velocity at which you send your data down, will surely make the downlink a bottleneck, or at least something to watch.
Spatial resolution versus revisit time: you jump from metres to kilometres and from hours to months. You might have very high temporal resolution with reduced spatial resolution; the other extreme in the graph is very high spatial resolution but relaxed timing. There is not necessarily a relation between the two. Another note on nomenclature: when you talk about multi-spectral, keep an eye on the meaning. Panchromatic is basically one band (black and white); colour is RGB, i.e. a given number of bits per band. Each band increases your memory and your on-board TM/TC: think of the subsystems that depend on what you put on board. And multispectral is not hyperspectral: current technology is multispectral, while hyperspectral is the future of payload implementation for remote sensing. The same applies identically whenever you go and visit a planet.
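The two sizing relations behind the discussion above (spatial resolution from the optics, and radiometric/spectral resolution driving data volume) can be sketched with a few lines of code. All instrument values below are assumed for illustration, not taken from any real mission.

```python
# Sketch (assumed instrument values): ground sample distance from orbit
# altitude, focal length and detector pixel pitch, plus the raw data volume
# of one image -- showing how radiometric/spectral resolution drives the
# on-board memory and downlink sizing.
ALT_M = 700e3        # orbit altitude [m] (assumed)
FOCAL_M = 0.7        # instrument focal length [m] (assumed)
PITCH_M = 6.5e-6     # detector pixel pitch [m] (assumed)

def gsd_m(alt: float, focal: float, pitch: float) -> float:
    """Ground sample distance: size on ground imaged by one pixel (nadir)."""
    return alt * pitch / focal

def image_bits(n_rows: int, n_cols: int, bits_per_pixel: int, n_bands: int) -> int:
    """Raw (uncompressed) size of one image in bits."""
    return n_rows * n_cols * bits_per_pixel * n_bands

print(f"GSD = {gsd_m(ALT_M, FOCAL_M, PITCH_M):.1f} m")  # -> 6.5 m

# Panchromatic (1 band, 8 bit) vs. multispectral (8 bands, 12 bit), 4k x 4k
pan = image_bits(4096, 4096, 8, 1)
ms = image_bits(4096, 4096, 12, 8)
print(f"pan: {pan / 8 / 1e6:.0f} MB   multispectral: {ms / 8 / 1e6:.0f} MB")
```

Note how the multispectral image is 12x larger than the panchromatic one for the same detector: that factor propagates straight into the mass-memory and downlink budgets.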

SSEO_L3_Activities in Space_2022, from slide 54


Which missions? Commercial
Commercial missions are of course funded by private entities, for revenue: data are paid for. It is a completely different market, and the world of the space economy is changing. Up to now the commercial segment was basically telecom and broadcast. What I would like you to keep in mind is this: the representation of the RF band spectrum that you find in books. Do not assume you can use whatever frequency you want: the whole spectrum is standardised by the ITU (International Telecommunication Union), so you have very specific bands for specific tasks. In the example, bands are marked down-up, only down or only up, meaning you can use those frequencies for signals from the station to the S/C, from the S/C to the station, or both. You cannot select freely: you have to stay in those slots, and you have to apply for those slots. Example: the Galileo constellation. If you want to launch your mission, you have to reserve the frequencies you want to use at least one year in advance. This means you have to design the mission at least one year before and not change the frequencies afterwards. You must receive the union's approval to use that precise frequency and band, and if you do not use it in the time frame you asked for, the allocation is basically withdrawn. Example: a mission risked losing its frequency booking, so a dummy satellite was launched with a transponder for that frequency, just to keep it locked and booked.

Inside each of those rows there is the dedication of that frequency: up and down, for amateurs, for the military, for ships. You are not free in the design. When designing commercial missions especially, but really for any mission, one thing you have to check is the documentation of the frequencies you may adopt. For example, X band was not allowed for lunar mission communication, and you cannot use X band for a satellite that is not doing Earth observation. The Prisma mission was around Earth, but it was not doing any observation, so the use of X band was forbidden. Higher frequencies reduce the power to be transmitted and give you a better antenna gain, helping with some of the link losses. Just keep this in mind as a driver for the slides.
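The "higher frequency, better gain" remark above can be made concrete: for a fixed dish diameter, parabolic-antenna gain grows as (D/λ)². Here is a minimal sketch; the dish size, efficiency and band frequencies are assumed, illustrative values.

```python
import math

# Sketch: why higher carrier frequencies help the link -- for a fixed dish
# diameter the antenna gain grows as (pi * D / lambda)^2. Values assumed.
C = 3e8  # speed of light [m/s]

def dish_gain_db(diameter_m: float, freq_hz: float, efficiency: float = 0.55) -> float:
    """Parabolic-antenna gain G = eta * (pi * D / lambda)^2, returned in dBi."""
    lam = C / freq_hz
    g = efficiency * (math.pi * diameter_m / lam) ** 2
    return 10 * math.log10(g)

# Same 1 m dish at S-band (~2.2 GHz) vs X-band (~8.4 GHz), assumed frequencies
for label, f in (("S-band", 2.2e9), ("X-band", 8.4e9)):
    print(f"{label}: {dish_gain_db(1.0, f):.1f} dBi")
```

Moving the same dish from S-band to X-band gains about 11–12 dB, which can be traded against transmitted power, dish size, or data rate in the link budget.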

SSEO-L4-System Engineering_2022
New broad topic: systems engineering. You will learn it just by doing it, and not only on the technical side but also on the programmatic one. It is a holistic view of the whole system, from conceptual design up to the end of life (disposal), taking scheduling etc. into account.
Who should you be? Someone really active, thanks to a complete view of both the technical side and operations: you make decisions whenever there are alternatives. Usually you do not become a systems engineer as your first role in a company. Beyond the alternatives, the systems engineer also looks at bottlenecks: if a provider does not deliver what is expected on time, it is up to you to make the decision, for example, to change provider.
From the engineering point of view, you are in charge of continuously checking the budgets at system level (every mass, every power demand and supply, …), and of checking that if someone claimed a given allocation, that allocation actually exists — not only in design but also in testing. Everything must be cross-checked as a whole.
You have to model your costs, providing an expectation at the beginning and the real cost breakdown at the very end of the mission, and the two shall be compliant. The most difficult part is putting uncertainties and margins into your programmatics, so that you are robust against the evolution of the process: many variables do not depend on you. You shall be capable, at the very beginning, of margining the system design and implementation. Was the mission built with risk and reliability as a criterion, or without caring about risk, to be fast and cheap? The latter could be the case for the Prisma mission, but not for the Ulysses mission.
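The continuous budget cross-check described above is, in practice, a margined sum. A minimal sketch follows, in the spirit of ECSS-style practice of giving less mature equipment a larger margin; all items, masses and percentages are purely illustrative assumptions.

```python
# Sketch of a mass budget with per-item maturity margins plus a
# system-level margin, the kind of running cross-check the systems
# engineer owns. Items, masses and margins are illustrative assumptions.
equipment = [
    # (name, basic mass [kg], maturity margin)
    ("structure",   45.0, 0.10),
    ("solar array", 12.0, 0.05),   # off-the-shelf -> low margin (assumed)
    ("new payload", 20.0, 0.20),   # new development -> high margin (assumed)
    ("avionics",     8.0, 0.10),
]
SYSTEM_MARGIN = 0.20   # applied on top of the summed margined masses (assumed)

def total_mass(items, system_margin: float) -> float:
    """Sum each item with its own margin, then add the system-level margin."""
    nominal = sum(m * (1 + k) for _, m, k in items)
    return nominal * (1 + system_margin)

print(f"margined system mass: {total_mass(equipment, SYSTEM_MARGIN):.1f} kg")
```

Re-running this table whenever a subsystem updates its numbers is exactly the "continuous check of the budget as a system" the lecture refers to; the same pattern applies to power, data and delta-V budgets.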
Slide 27
There is a standard in Europe, one in the US, one in Japan, but they are now converging towards the same way of working. We will not go through these slides in class. To develop a project you can approach it in very different ways: you have to design, procure the physical elements to build the system, test it and put it into operation. This can be done in sequence, or parallelised, or overlapped a bit, depending on what you are designing and on the systems-engineering philosophy you want to adopt. In space there is one philosophy: the V diagram.
It basically means you start at the top left (analysing the mission top-down) and then, once you have a clear view of what you can or shall do, you keep going and build the mission bottom-up. At the very high level I am given the mission objective (I want to look at the solar caps; I want a step forward in navigation and control). This is a very high-level, qualitative concept. We break the qualitative concept down to the quantitative elements needed to satisfy those objectives: from objectives to functionalities (e.g. reaching the solar caps). Here I start quantifying the tree of solutions: still qualitative, but with something in mind that I can compute (time, delta-V and cost) to compare the possibilities. Then I jump to a potential design with components and subsystems: this is the propellant I need, this is the launcher I selected, these are the thrusters selected for precise pointing during the gravity assist. You start from something qualitative and reach quantification. At this point you arrive at the so-called baseline. So you went through: functional analysis, criteria to compare the alternatives, identification of the alternatives, sizing against the criteria, selection (the best compromise according to the criteria you chose). Then I have my system “on the table”, in each subsystem and phase. Now I know what I want and how it is shaped. At this point I jump to the other leg of the V diagram and start building, bottom-up. I buy or build the sun sensors, the tank, the thrusters; I put each element in my lab, in my clean room; I check whether the performance is what I expected. If it is, I certify that what I designed is in line; if not, I correct the expectation on the design. I do this for all of them in parallel and start integrating the system: I take my sun sensors, put them in a chain with the electronic board and the actuators, check them against the incoming Sun and verify that the actuators are commanded as expected. If not, I go back and make a correction in the verification; the documentation is updated to the real elements. Once I have my ADCS subsystem, I connect it to the structure and check, physically in the lab, whether the whole structure with dummy masses is controllable as expected. And so on up to the complete satellite, software included, with the final testing in the thermal-vacuum chamber, on the shaker, etc. Keep in mind: you start from the system and go down to the subsystems for the design; then you start from the subsystems and build up the whole satellite. You continuously cross-check that what you expected is what you have; if not, you manage it in risk, cost, schedule, and in requirement verification/redefinition.
V diagram: you start from the objectives and arrive at the product. One leg is your baseline design; the other is building, testing and integrating. And you do this through the requirements: say I will have an inter-satellite link because I need it for formation flying, and I put a requirement on the accuracy I want from the system. When you build the antennas and transponders on the two satellites, you remember you have to run this test and cross-check that, with the two antennas talking to each other, they are capable of giving you that measurement. In this way the requirement carries a message to the testing. If the test fails, you have to understand why: is it a matter of technology, of software, of something wrong, or is the requirement unfeasible? Then you either change the hardware or change the requirement.
To manage all this, the whole process is standardised into phases: situations in which you have to produce a precise output and demonstrate that you reached it; if not, you cannot jump to the next one. These are the classical project phases in space.

Pre-phase A: the conceptual idea. Visit the caps of the Sun: does it make sense, or is it completely unfeasible? This is typically done at science or agency level. To verify whether it is feasible at mission level and makes sense, a more detailed analysis is needed: typically one or two months of work.
Phase A: in this phase you enter the subsystem design. You identify the functionalities of the mission and the criteria to propose the alternatives, then the alternatives within each subsystem: I might use a monopropellant propulsion unit or an electric one. You arrive at a feasibility statement: a mission that is feasible, with the satellite designed at subsystem level, but keeping the alternatives open and having identified the most important drivers, i.e. most of the boundaries for the design and then the implementation of the mission. At the end of each phase there are specific reviews to hold with your clients: the mission definition review, the preliminary requirements review (the output is a document).
Phase B is the most important before starting to build: it is the one in which you prepare the technical design ready for implementation. You define the baseline: there is no other way the satellite will be built. You select the components that will be on board for each functionality. You define the operations: which and when, communicating with which kind of station; you start keeping an eye on the frequency booking. At the end of this phase, all the technology on board shall be at least at TRL 6; otherwise you do not proceed to flight, and you wait until the technology is ready to fly.
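The end-of-phase-B gate just described (everything on board at least at TRL 6, i.e. demonstrated in a relevant environment) is, in effect, a simple filter over the equipment list. A minimal sketch follows; the equipment names and TRL values are hypothetical.

```python
# Sketch of the end-of-phase-B TRL gate: every technology in the baseline
# shall be at least TRL 6. Equipment list and TRL values are hypothetical.
MIN_TRL_FOR_FLIGHT = 6

baseline = {
    "star tracker":       9,   # flight proven (assumed)
    "reaction wheels":    8,
    "novel e-propulsion": 5,   # only validated in the lab so far (assumed)
    "transponder":        7,
}

def trl_gate(equipment: dict, min_trl: int) -> list:
    """Return the items that block the transition to the next phase."""
    return sorted(name for name, trl in equipment.items() if trl < min_trl)

blockers = trl_gate(baseline, MIN_TRL_FOR_FLIGHT)
print("blockers:", blockers or "none -- gate passed")
```

Any non-empty blocker list means either waiting for the technology to mature or swapping in a higher-TRL alternative kept open during phase A.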
Phase C: there are two important steps: the critical design review and the review of the implementation. At this level you start breadboarding what is critical in the mission: testing parts of the components, buying or building the components. At the end of this phase you have all the hardware in house. Software development also starts, but it can start only after the design is fixed.
Phase D is devoted to integrating, testing at each step of the integration, and performing qualification and acceptance. They are not the same: the tests can be qualitatively the same, but the way you run them differs. Acceptance is run only on the unit that is going to fly; qualification serves to understand whether the components will survive the worst conditions they might encounter.
Phase E: at this point you deliver the satellite to the launch provider. You have the launch and operations phases, with the mission operations centre and the science operations centre: who is going to manage the mission? You start implementing, booking the ground stations, and testing the signal chain from the avionics to the ground station, on the ground.
Phase F: how will the disposal be done? You cannot pass this phase review if there are no options for disposing of the spacecraft at the end of life. This leg is important and is looked at during all the preceding steps of mission implementation.

SSEO Lecture 08/03


By Giuseppe Oliva
Many ESA calls do not have only a technological purpose: they may also ask you to provide a certain output at each phase of the space project. Up to phase B you basically do not design to build the product; rather, you design, analyse and discuss to define the requirements and “fix the rules”. This is done to understand whether what you have in mind is correct. Keep the door open to alternatives, to be sure to spread the search over the whole field of your variables and solutions.
➔ Start keeping the “milestones” in mind; they are standardised and expressed through set
acronyms.
o MDR: Mission definition review
o PRR: Preliminary requirements review
o SRR: System requirements review
o PDR: Preliminary design review
o CDR: Critical design review
o QR: Qualification review
o FAR: Flight acceptance review
Nomenclature may change a bit among ESA, NASA and others (e.g., JAXA, DoD,…).

On the horizontal axis you can see the phase you are currently in; the length of the arrows is not random: of course, the further you move towards implementation, the more time you need (for example, phase 0 may last 4 months, phase A 6 months, phase B 10 months, etc.).
Up to phase B you keep iterating towards the Preliminary Design Review (PDR): you start considering what the baseline of your mission could be (mission analysis, operations, the s/c itself, any lander, the ground segment, …); at this stage you still have full control over the alternatives, so that if something goes wrong you can revise the design and start over.
In phase C you start breadboarding, engineering modelling and development.
You cannot just fix a requirement and then go straight on with design and implementation: you initially fix the box, then you do analysis, test and verify, so that later you can come back and correct that requirement and all those that depend on it; this is typically done until the end of phase B.
At the Critical Design Review (CDR), at the end of phase C, you should have frozen the architecture, which means you will have selected the exact reaction wheels, tanks, sun sensors, etc. At the Preliminary Design Review (PDR) you had defined the baseline: for example, you may have chosen sun sensors for attitude determination but still kept, say, magnetometers as a backup; you still put the possibilities on the desk, but they are not yet definitive.
You keep designing until the Flight Acceptance Review (FAR), when you take the payload and deliver it to whoever is building the satellite, or to the launch provider, etc. Make sure that whatever you build and test is compliant with the design. This is just to say that there is no “crisp end” to design; it is a continuous effort.
As you see, there is a sort of funnel: you shall be broad at the beginning; gauge all the “strange” ideas you have, because this makes the final baseline definition richer and more robust. Then the funnel becomes narrower and narrower towards the operations of the system. Cross-check that you have done all the tasks in the box before jumping to the next step.

In the upper part you can see what the external world is in charge of, and below what the agency oversees; there is a continuous exchange between the two. Missions are fixed and defined at very high level by the agency together with scientists and stakeholders (for example: looking at the solar caps, sampling Martian soil, drilling a comet, etc.).
Beware of science requirements: if I am asked to drill at 0 K, maybe that will not be very feasible. Specs are given by the scientists to industry through the calls; industries simulate, understand, establish criteria, optimise, propose for selection, etc. In phase A you fix the requirements, going into a little more detail on what is feasible.
Example: you need a drill to collect icy samples, so determine which kind of friction is generated; understand whether a drill of 1 cm diameter at a velocity of a few mm/min is feasible. You do not care about the design yet, but about the requirements (e.g., the sampler shall move at a velocity lower than 1 mm/min, the tool diameter shall be no larger than 1 cm, etc.). Whatever you do in analysis is used to write the requirements: this happens up to phase B (again!).
The agency judges whether the design is acceptable; if so, you can keep going and refine the specs (new requirements), then another call goes out against the new requirements; otherwise they come back and ask for something more. Usually agencies allow more than one study in parallel: ESA gives the same specs to different companies and, when these deliver their requirements and potential designs, ESA “closes the funnel”, i.e. selects one of them on the basis of the better requirements list and the better conceptual possibilities; eventually they close the path and begin to design in detail, fix the requirements in detail, and start testing, integrating, implementing, qualifying and launching.
Now that you have a timeline with very precise milestones, the work itself can be carried out in very different ways. This applies not only to the space industry but also to aeronautics, automotive, and whenever you have a complex system to deal with.
Even the so-called life cycle is a “sui generis” dynamics: you have free variables and control variables, so you can analyse and model the dynamics and make decisions on how to control it. It involves personnel and production, but it is like a classical dynamics problem: you can model or control it in different ways according to what you want. In this sense you have criteria to select your approach, and some of those are reported here. It depends, for example, on:

- whether the project is well known, so you are just building something recurrent, or it is completely new;
- whether you can accept a high level of risk, or you cannot;
- whether you have money, or you do not;
- whether you have time, or you do not.

These can be drivers for selecting the approach. The main modelling approaches for life-cycle management are reported in the list. (Lavagna goes through this very quickly, as she knows we hate this part =( .)
Sequential/waterfall approach: very old, still sometimes applied. The approach is sequential:
- You have a problem;
- You analyse the problem and understand the goal;
- You fix the requirements, qualitatively and quantitatively;
- You close the door (no one can touch the requirements anymore);
- You start designing.
At each step you “close the door”, i.e. you fix the output permanently and move on to the next step.
This can be efficient if you do not have the whole team present all the time, but it is inefficient when you are dealing with something completely new and unknown (which happens frequently in the space domain, but also when you design the Fiat 500).

It is no longer adopted in space because of its many drawbacks: notably, you cannot feed “lessons learned” back from design to requirements, from implementation to design, and so on in a loop.

→ Suboptimal solution, except when you know the product perfectly.

A driver could be that you have no time to loop and re-analyse, so you constrain everything and deliver the product to the market as soon as possible.
Incremental model: a hybrid approach. You have a sequential part at requirements level and at very high-level system design, then you start parallelising production and putting into operation.

For example, you run the first life cycle for one unit, and afterwards you do the same for the others. In this case the drivers can be:
- the system is quite well defined: the requirements are understood, known and already frozen;
- you might not have the financing or resources to put everything in place at once;
- you might accept more than one delivery.

Example: the Galileo constellation. You fix the boundaries for the design and the high-level design of the whole constellation architecture (what you want to do, the metrics to preserve, the precision for users on the ground, the frequencies, …). Having that, you do not put in place 60 satellites at once like Starlink, but deliver in batches; this way you already provide service and revenue, and moreover it helps you understand any criticalities arising on the flown satellites, which can be resolved on the new batches (of course you cannot change the whole design, but you can tune the implementation itself).
Evolutionary model: an enhanced version of the former. The requirements are not so well defined as a whole, but you want to manage resources and risk while cross-checking that the design matches the implementation; you do not put only phases C, D, E, F in sequence, but parallelise the whole cycle, doing what we saw for the sequential model but for several units: you obtain the whole system only at the end of the story.

Example: the Sentinel fleet, made up of several satellites launched in sequence and with different functionalities (Sentinel-1 is a radar mission focused on the crust, Sentinel-2 is multispectral, others focus on the oceans, risk monitoring on Earth, the atmosphere, etc.). Overall the goal is to monitor the Earth, but the individual satellites are specialised; each of them has requirements that shall be analysed, understood, traded off and fixed, and you shall do this for each of them, since what happens with the service of the first may change the following ones.

Spiral model: just for your knowledge, usually applied to software development; quite similar to the evolutionary model, but it includes risk as a strong driver: you split the subproducts in time to manage and limit the risk of the products on the market (alpha version, beta, final release). You put a still-unstable product on the market, and it is up to the users to debug the software, stabilize it, update it, etc.
It is a spiral because what you gain at the end of step 1 (alpha) re-enters at the beginning as feedback for the same product.
V-diagram: classically applied and standardized for space. Instead of a straight line, the lifecycle is bent into a V so that corresponding elements on the two branches can communicate continuously with each other. The timeline runs from left to right: you start from the concept, dig into component definition, then verify, test, assemble, and you finally arrive at delivery.

Example GRACE: at the beginning you want to measure the acceleration of the vehicle down to 10^-5 m/s^2, but this requirement is not fixed. At design level you start looking at the available products, the state of the art for accelerometers (European or American? What does this entail?). When you write requirements there is a column asking how you will perceive and measure the acceleration you proposed. This connects whatever you write with what is going to happen in the future of the project, and saves you from being caught in a mess at integration, unable to satisfy your requirements because they were wrong or incomprehensible.

“Verification” means checking that what you have built perfectly respects the requirements: you check that your accelerometer measures accelerations down to that level, plus or minus the accuracy you put in the requirements.
“Validation” means that your object does satisfy the objectives of the mission: maybe the
accelerometer satisfies accuracy with respect to requirements, but then all over the mission you
don’t need an accelerometer but a fine magnetometer. The object must make sense in the context
of the mission. Thus, you check not only that the single object does what you required, but also that
it makes sense in the domain in which you are.
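The verification step can be sketched as a simple tolerance check; the numbers below are illustrative, not actual GRACE figures:

```python
def verify_requirement(measured, required, tolerance):
    """Verification: does the measured performance respect the
    requirement within the stated accuracy? (illustrative values)"""
    return abs(measured - required) <= tolerance

# accelerometer shall resolve 1e-5 m/s^2, +/- an assumed 5e-7 m/s^2 accuracy
print(verify_requirement(measured=1.02e-5, required=1.0e-5, tolerance=5e-7))  # True
```

Validation cannot be reduced to such a check: it needs the mission context, not just the number.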
The mark “ECSS-M-ST-1000-ESA” means:
- ECSS: European space standards (European Cooperation for Space Standardization)
- M: management branch
- 1000: ID of the document
This document explains how a lifecycle for space projects shall be managed. On the left you can see the phases, tasks and activities. Up to phase B you go down the V-diagram; when you reach the PDR (meaning you have delved into the lowest detail of the project) you are ready to build the blocks, and you start phase C climbing back up the V. This way of proceeding is strictly mandatory when you work for ESA, NASA, JAXA, etc.
At the beginning of the lifecycle the costs involved are quite small: you just have to pay the “poor” engineers who do calculations and requirements, and the PCs for numerical analyses; however, in this early phase you are committing almost half of what is going to be spent afterwards: it is here that you decide what you are going to spend on what and fix the baseline for the mission. The history of the mission is fixed here; the cost and personnel of operations stay in the design. Hence the cheapest stage is the most important, because you set the destiny of the mission in terms of how and what you will spend for its implementation.
The graph on the right is similar: it represents the investment you put into the design versus the cost discrepancy you have at the end of the mission; the hyperbola suggests an inverse relationship: the more you spend in design, the less you will invest while building the system.
ExoMars started in 2003 and is still to be completely launched (first launch in 2016). At the very beginning it was a simple mission with just a rover and an aerocapture; then, due to too “optimistic” cost estimations and a not very well-tuned V-diagram, mass and the number of sensors grew. The curves were not well designed, so Europe was forced to ask the Americans for help; but then of course they wanted the rover to be fatter and fatter with payload, so they had to call the Russians, and it became so fat that they had to split the mission in two.
The same happened to GOCE: there was a problem with the new electric propulsion such that in a few weeks the satellite would have fallen. The provider of the propulsion system readily disappeared; Thales had to rethink a new thruster: but this means linking the mechanical, thermal and data interfaces with the rest of the satellite, not just finding a new supplier, so you have to redesign all of these, entailing a lot of “insults and whatnot hurled at half the world” (sic). Think of this: what if I end up having to face show-stoppers, limitations and whatsoever? You shall decline this in the requirements on one side, and in the design, with credible and compliant alternatives, on the other.
Be sure to have alternatives when you design, so include backups on the launch and trajectories, but also, for example, on the country you are getting your components from.

Example AGILE (Italian satellite): at some point they needed to switch launcher to the PSLV (Indian launcher) right when the satellite was ready to be launched. But then the Americans forbade delivering the satellite to the Indian launcher, because there was a US battery onboard covered by ITAR (regulations for technology potentially used in military conditions). India could not launch with that battery because it was property of the US military, so they had to take the satellite off and find another battery that could fly on an Indian launcher: in short, times were prolonged and costs skyrocketed, a nightmare.
Example Mars Express: a Gantt chart organizes what we have discussed so far in a practical way. Fill into the chart all the phases from design to disposal, durations, meetings; break it down with respect to all the elements you have on board (you have to define schedules and phases for each of them).
Writing documents: Lavagna shows a document to frighten us a bit. Depending on the phase and milestone, you see which documents have to be released. E.g., for the project management plan, the fact that you deliver the same document repeatedly over time means that you continuously refine, through the V approach, what you wrote before: draft of the requirements → draft of the design → enter the design in detail and refine → update the documents, etc.
If you don't deliver you stay pending. There is a document specifying how to write each kind of document. Imagine you have a satellite in space, there is an anomaly, and you don't understand its behavior: if you didn't properly write the documentation about design and testing, you can't save your satellite; this is just to say it is not a waste of time to write this kind of documentation.

On the left and central sides you start from the top: start from what you want to realize, split it into components or, better, functionalities; then for each of them you start discussing tanks, kind of fuel, materials, shape, etc. For each step you define requirements.
Lavagna expects us to have results and to reconstruct what has been done beforehand, with this kind of lifecycle approach, to arrive at that product.
Looping scheme:

- write the first requirements: e.g., land on Mars; the landing shall have a touchdown velocity < 1 m/s vertically, 0.5 m/s horizontally
- start functional analysis and decomposition of the system
- understand main functionalities
Think of what and not how! Functional means: you have to launch, transfer, prepare the system to be ready for operations, talk to someone that dumps the data, gather energy somewhere, etc. Then you enter the design: how can I reach the orbit? Single launch? Shared launch? Plane-based launchers (like Pegasus)? Start putting alternatives on the desk and prepare for analysis. With those alternatives you start understanding, designing and getting some numbers.
For example: we asked for a maximum 1 m/s vertical velocity at landing; where does this number come from? Does it make sense? Is it feasible on the planet you are landing on? You size and analyze because of requirements, not because you are building the system at this point.
You also walk through the so-called ConOps (conceptual operations), which are the timeline of your mission; in general, ConOps also include the phases we saw, but at a very high level: how long the launch phase will be, what the commissioning phase (preparing the S/C to be operational) will be, what I will do there. In short, you plan the macro activities the system has to do to answer the functionalities you devised; you start building the time history of the system together with the requirements, and then you loop. If you have done all you needed at system level you can go further and start analyzing the subsystems; if not, you don't close the loop and keep analyzing the functionalities.
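The looping scheme can be sketched as pseudocode; `analyze` stands in for whatever ConOps and sizing analysis closes the loop, and all names and requirements below are hypothetical:

```python
def design_loop(requirements, analyze, max_iterations=10):
    """Phase-A style loop (illustrative): derive functionalities from
    requirements, analyze, and refine until the loop closes."""
    for iteration in range(1, max_iterations + 1):
        # functional analysis: each requirement spawns functionalities
        functions = [f"function answering: {r}" for r in requirements]
        closed, new_requirements = analyze(functions)  # ConOps + analysis
        if closed:
            return iteration, functions  # ready to go down to subsystems
        requirements = requirements + new_requirements  # refine and loop
    raise RuntimeError("loop not closed: revisit the functionalities")

# dummy analysis that closes once a touchdown-velocity requirement appears
def analyze(functions):
    if any("touchdown" in f for f in functions):
        return True, []
    return False, ["touchdown velocity < 1 m/s"]

iterations, functions = design_loop(["land on Mars"], analyze)
```

Only when `analyze` declares the system level closed do you descend to the subsystems.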

Example mission:

- objective (high level): sample Europa's soil and collect 50 g of ice
- technical requirements (from stakeholders or engineers): need to stay at Europa, mission operational within 2030, …
- functionalities and architecture: reach the surface (how? Transfer, land, operate, ascend, come back → one vehicle for 5 functionalities). One object can answer more than one functionality (e.g., a camera determines attitude AND images planets), or you may want to assign one functionality to each object.
- Trade-off
- Conceptual operations
- Analysis: check if you’re okay with level of details otherwise come back

Functional decomposition
Basically, once you understand the tasks required to arrive at the final goal (monitor the ocean, variations in natural elements on Earth, …), flow backwards and define what you need: pass over the whole planet or just some regions, stay in orbit, reach the orbit, zone illuminated by the Sun or not, …
From this you jump into system requirements; for example, “have to launch” is a functionality, while “the system shall be launched by a medium-class launcher” is a requirement answering the functionality of being put in orbit.
The functionality “to be put in orbit” can be answered by:

1) A launcher that puts me directly on my orbit
2) A launcher that puts me somewhere, and then propulsion to get where I want

Same functionality, two alternatives to study. Put all possibilities on the desk, do some calculations, fix requirements (NOT solutions!). If you analyze functions, you also open the door to solutions: at the end you will need a launcher, a propulsion unit.
(Very stupid) example: Icarus tried to understand how to fly by analyzing the physical components that birds use to fly (legs, eyes, brain, wings); using this paradigm, man was not able to fly. On the contrary, planes are able to lift off the ground because the functional analysis was done properly by aeronautical engineers. Looking directly at birds, we see that they merge both thrust and aerodynamics in the wings; however, this is not mandatory: you can assign these two functionalities to different devices, thrust to thrusters and lift to wings. Once you understand that thrust and lift are functions, two distinct physical components can be assigned to them.
It is perfectly acceptable to assign two or more functions to one physical component, whereas it would be a mistake to assign one function to two physical components.
Each of the functions at top level can be detailed: e.g., perform missions → arrive to orbit, provide
resources, compute attitude, … At some point you arrive at a level of “granularity” where you
effectively start to think at possible solutions.
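A functional tree can be represented as nested, functions-only data (no hardware anywhere); the function names in this sketch are invented:

```python
# Illustrative functional tree: only "what", never "how"
functional_tree = {
    "perform mission": {
        "reach the orbit": {},
        "provide resources": {"gather energy": {}, "store energy": {}},
        "compute attitude": {},
        "downlink data": {},
    }
}

def leaf_functions(tree):
    """Collect the finest-granularity functions, where you finally
    start thinking about possible solutions."""
    leaves = []
    for name, children in tree.items():
        leaves.extend(leaf_functions(children) if children else [name])
    return leaves

print(leaf_functions(functional_tree))
```

The leaves are the level of “granularity” where solutions start to appear.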

Requirements are classified in:


- Functional requirements define what functions need to be done to accomplish the objectives: “the system shall be launched into a Sun-synchronous orbit”
- Performance requirements define how well the system needs to perform the functions: “the system shall be launched into SSO with a max mass of 1.5 t”, “accuracy shall be better than 1°”, etc. These need a bit of analysis to compute the numbers, but not a sizing (the analysis should be broad, not linked to a specific technology, otherwise you implicitly impose the solution).

Example:

- Objective: landing a man on the Moon
  o Functions:
    ▪ Accelerate the astronaut out of the Earth's gravity
    ▪ Transport the astronaut to the Moon's surface
    ▪ Allow the astronaut to walk on the Moon
- Objective: returning them safely to Earth
  o Functions:
    ▪ Provide a life support system from launch to landing
    ▪ Accelerate the astronaut out of the Moon's gravity
    ▪ Transport the astronaut to the Earth's vicinity
    ▪ Return the astronaut to the Earth's surface

Example Inspector:

A spacecraft spiraling around a debris object to image it, correctly understand its shape and prepare for removal. To get there, there is a first level of breakdown (spacecraft and ground); for the spacecraft, you see the primary functions (not in time order); for each function, you have to put in place a series of sub-functions.

➔ System functional tree: there is no hardware inside, no definition of how, but just what; functionalities, not solutions

If you don't do this you may miss something, and you don't characterize relevant parts of the system.

4.2 CONCEPTUAL OPERATIONS (ConOps)


Functional analysis is important as a path from the main objectives of a mission/project/program, translated into functionalities (= what I want to do), broken down until we reach component level. In this domain, another way to decline the functionalities is analyzing the conceptual operations: let's start putting our activities on a timeline and connecting them sequentially or in parallel. This gives a plan (the logic of the activities we have to do) and a schedule (actions and activities connected in a specific sequence that shall be respected: if one activity is not satisfied we cannot go on) => we need to create a state vector with the relevant variables (discrete, continuous or qualitative).
OSS: This operational classification has to be done at every level of the lifecycle -> as you go down in time you add more details.
In phase C we have to define the operations with respect to what we do in the lab or in terms of performance, not only in terms of the satellite as soon as we launch it. The ConOps start from 𝑡0 (when you make the decision to build a system) up to the end of life (disposal), because you need to plan and classify all the operations in each of the phases.
In red we highlighted items that grasp the idea of what we need to put in sequence, like simulations, functional tests, choice of facilities, etc. We have to book the test facilities well in advance (at least one year) with respect to when we are ready with the hardware, otherwise the facilities are not available.
During the launch, apart from the obvious operations (being detached, opening the wings/any deployable system), we have to be sure to activate the batteries first.
EXAMPLE: some ConOps can be:

1) Detach
2) Verify that I have detached
3) Switch on the batteries
4) Verify that we have voltage and current from the batteries -> I need a sensor on the battery line => input for the design (I'm not designing yet)
5) Open the solar panels or deploy an antenna
6) Check that the panels are open or the antenna is deployed
7) Detumble (we can decide whether to detumble before or after the deployment, before or after the battery activation, etc.)
OSS: If the wheels require a large amount of power, maybe we can wait until we have the power coming from the panels (instead of using the battery)
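These action/verification pairs can be sketched as a sequence that cannot advance past a failed check; all the criteria strings here are invented for illustration:

```python
# Illustrative LEOP sequence: each action is paired with the check
# that must pass before the schedule can move on.
LEOP_SEQUENCE = [
    ("detach from launcher", "separation switch open"),
    ("switch on batteries",  "bus voltage and current present"),
    ("deploy solar panels",  "deployment switches closed"),
    ("detumble",             "angular rate below threshold"),
]

def run_sequence(sequence, check):
    """Run actions in order; stop at the first failed verification."""
    completed = []
    for action, criterion in sequence:
        if not check(criterion):
            return completed, action  # blocked here, go to safe mode
        completed.append(action)
    return completed, None

# example: everything nominal except the panel deployment check
nominal = {"separation switch open", "bus voltage and current present"}
done, blocked = run_sequence(LEOP_SEQUENCE, lambda c: c in nominal)
```

The point is the schedule logic: if one verification is not satisfied, you cannot go on.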
Since for now we are defining the boundaries, we have to address some requirements (ex: size of the battery), but only after having defined the operations and their sequence.
OSS: Do not forget to define the safe mode of the operations (robustness of the mission) -> fix requirements in case everything goes wrong. Typically, if the spacecraft is completely lost, there are 2 functionalities that we must guarantee:

1) Communication -> the s/c shall be able to communicate to someone that it is in trouble
2) Power -> needed to ensure communication
EXAMPLE: Imagine the spacecraft is in eclipse and detects a failure on board -> we need to fix the requirements:

- Any power storage shall be sized considering margins based on when you enter and when you exit the eclipse phase -> I need to survive from one ground link to the next (it is fundamental to communicate the s/c condition to ground) => I need to talk to the mission analysis and TT&C teams because they define the window between one link and the other.
OSS: The requirement is not saying how to solve the problem (which battery, etc.) but what we have to take into account.
- The telecom subsystem (in safe mode) shall be capable of communicating no matter the attitude (a possible solution is an omnidirectional antenna) -> this also drives a requirement on the ground stations (you need to be sure that the network on ground is capable of receiving a low signal-to-noise-ratio signal).
OSS: When you get a telemetry that is not nominal you have to react immediately, but you don't know immediately which type of failure you have, because you need a bit of time to manage and rearrange all the data you get from the satellite => we need to put the s/c in safety and then have time to reason without losing the satellite.
In general, when you are off-nominal you don't have to solve the problem right away; first of all you have to detect (mandatory point) that the system is off-nominal, then make the decision whether to recover the system or not.
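The storage-sizing logic above can be sketched numerically; the power level, gap duration, depth of discharge and margin below are all assumed, illustrative numbers, not mission values:

```python
def battery_capacity_wh(safe_mode_power_w, gap_h, dod=0.4, margin=1.2):
    """Illustrative storage sizing: survive the longest gap between
    ground links (or the eclipse) with margin, without exceeding the
    allowed depth of discharge. All inputs are assumptions."""
    energy_needed_wh = safe_mode_power_w * gap_h * margin
    return energy_needed_wh / dod

# e.g. a 20 W safe-mode load and a 12 h worst-case gap between links
print(battery_capacity_wh(20, 12))  # ~720 Wh
```

Note that the requirement fixes the quantities to account for (gap, margin), not the battery technology.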
On the right we have an example of a very high-level ConOps for a mission to the Moon. At the beginning it is very rough, but then you start adding more details:

- You cannot enter the orbit before transferring
- You cannot get on the surface before the transfer
- Etc.

These details may look trivial at this level, but as we go on with the mission analysis we will realize that they are fundamental.
OSS: Writing down the ConOps is a looping procedure (you start with a very general one and you keep coming back and adding more details).
During the launch I have to make sure that all mechanisms act before the deployable elements are really released -> I need to check it when I mount it on the s/c, and I need to write it down on paper.
On the side we have another example, for a lunar program (more complex than before). First we need to launch and have a system on the surface, then another launch will put the station on the near-rectilinear orbit around the Moon. After that we release a lander on the Moon, and there will be a continuous passage from the surface to the station and back, and so on.

The ConOps of Mars Sample Return are quite related to the functional analysis and the architecture. In this case I want to catch samples from Mars and have them back robotically; if we start analyzing the functionalities, I can:

- arrive there with a single vehicle, collect, ascend and then come back
- arrive there with one vehicle, collect, and ascend only with part of the vehicle
- etc.

In the real mission they decided to launch, have an orbiter and arrive on the ground with a large rover; then they split the functionalities:

- I want to collect the samples -> one rover
- I want to put the samples in the ascending vehicle -> I can use the same rover or another one (for reliability: if one fails you have the other; if you have both you are quicker).
- The small ascending vehicle will be captured by the orbiter (it is not directed directly toward the Earth) -> even if the ascending vehicle fails I still have the orbiter and I can do science.

We can see that the architecture was selected based on the functionalities, and the choice was driven by robustness (it would not have been possible to do all of this with just one vehicle).
In general, even at this preliminary level, we have to specify who will take care of these ConOps => ground segment design -> we need to understand why that specific ground station was chosen, who is doing what, for answering which functionalities, etc.
In the example we can see that this satellite has more than one ground station, serving different functionalities:

- S-band -> data that are not so heavy from the data-budget point of view => middle frequency
- K-band -> science data (different antenna)

Now that I have 3 stations, I need to decide whether these data streams shall arrive at the same mission operation center and science operation center, when and how I shall transfer data from one to the other, etc. -> local management of the antennas.
OSS: It is important to look also at the “story” of previous missions -> we may find something helpful and similar to our mission (we need to understand why they chose that number of antennas, why that specific band, etc.).

EXAMPLE: Debris inspection (launch in 2025)


Requirement: the mission shall be launched/be operative before 2025, otherwise the debris will re-enter and we cannot observe it anymore -> the lighter I want to be, the lower my thrust will be and the more time I will need (this happens because we are using electric propulsion).
The first ConOps analysis is:

- LEOP -> 2 weeks
- Transfer -> 6/12 months -> why 1 year from the parking orbit to the target (in the Earth environment)? If, for example, we have a plane-change maneuver and we want to exploit the J2 perturbation, it takes time to change the plane from the initial to the desired one (the lower we are, the higher the precession rate we gain) -> the problem is then how to stop and fix the orbit.
- Inspect -> 2 months
- Dispose -> 2 months

The duration of each phase can be chosen by analogy (past missions), or we can assign it a TBC value -> we always need to do some analysis.
For the disposal, once we have identified the best orbit to stay in, then depending on the conditions we are in (if regulations are respected -> above 570 km we need to use propulsion, below 570 km we can exploit natural decay), we can shape the time.
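The J2 plane-change reasoning can be quantified with the standard first-order nodal precession formula; the altitudes and inclination below are just example values:

```python
import math

MU = 398600.4418   # km^3/s^2, Earth's gravitational parameter
RE = 6378.137      # km, Earth's equatorial radius
J2 = 1.08263e-3    # Earth oblateness coefficient

def raan_drift_deg_per_day(a_km, inc_deg, ecc=0.0):
    """Nodal precession rate due to J2: the lower the orbit,
    the faster the node (orbit plane) drifts."""
    n = math.sqrt(MU / a_km**3)                 # mean motion, rad/s
    p = a_km * (1.0 - ecc**2)                   # semi-latus rectum, km
    omega_dot = -1.5 * J2 * (RE / p)**2 * n * math.cos(math.radians(inc_deg))
    return math.degrees(omega_dot) * 86400.0    # deg/day

# example: the node drifts faster at 500 km than at 800 km altitude
low  = raan_drift_deg_per_day(RE + 500.0, 97.0)   # ~0.93 deg/day
high = raan_drift_deg_per_day(RE + 800.0, 97.0)   # ~0.80 deg/day
```

Dividing the required plane change by the differential drift rate gives the transfer time, which is why an electric, low-thrust mission needs months for this phase.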
4.2.1 Modes
When we design, we have to write down requirements; when we analyze an already launched mission, we shall analyze the phases (periods in the ConOps that happen once) and identify the modes. For example, during LEOP we have:

- Boot -> it might be an action that we perform after the failure mode (or safe mode) to check if the system is working (switch the satellite off and on)
- Determine the state
- Deploy the solar panels
- Communicate
- etc.

Note that these are functionalities, but also activities that can be put in a sequence.
OSS: The launcher interface regulations specify whether we switch on the computer before detachment or only after (when the s/c is far away).

4.3 SETTING CRITERIA


Now we have the limits and we need to start the design (implementation) -> we need to think about alternatives, and we must have numbers to compare them; therefore we must run simulations, selecting the criteria we want to judge the alternatives against.
Typically, we use a multicriteria approach -> multi-objective optimization or multi-criteria decision-making techniques with mixed (non-continuous) variables (number of vehicles, on-off of the systems, etc.).

Classical criteria used to compare the alternatives are:

- Mass -> always ask: if I select mass, do I have mathematical relationships (qualitative and quantitative) or statistical data so that I can get that number for each of my alternatives? If I do, it is a good criterion; if I don't, then it is a criterion I cannot manage because I cannot compare the alternatives ('reliability' is a good criterion, 'complexity' is not)
- Power budget
- Δv budget
- Pointing budget -> accuracy
- Link budget
- Cost budget
- Risk assessment
- Mission-specific budgets
OSS: I have to understand which criteria were used for the final configuration (find the drivers) and which alternatives could have been pursued but were not (and understand why).
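A minimal sketch of such a multi-criteria comparison is a normalized weighted sum; the alternatives, criteria values and weights below are all invented for illustration:

```python
# Invented trade-off data: two alternatives scored on three criteria
alternatives = {
    "single dedicated launch":   {"mass": 1200.0, "cost": 80.0, "reliability": 0.95},
    "shared launch + propulsion": {"mass": 900.0, "cost": 60.0, "reliability": 0.90},
}
weights = {"mass": 0.4, "cost": 0.3, "reliability": 0.3}
lower_is_better = {"mass", "cost"}

def score(name):
    """Normalize each criterion to [0, 1] over the alternatives
    and combine with the weights."""
    total = 0.0
    for criterion, weight in weights.items():
        values = [alt[criterion] for alt in alternatives.values()]
        lo, hi = min(values), max(values)
        x = (alternatives[name][criterion] - lo) / (hi - lo)
        if criterion in lower_is_better:
            x = 1.0 - x
        total += weight * x
    return total

best = max(alternatives, key=score)
```

Each criterion enters only because a quantitative relationship or statistic can produce its number; a criterion you cannot quantify cannot go in the sum.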

4.4 REQUIREMENTS

Let’s see how to write the requirements -> fundamental aspect.

ECSS-E-ST-10-06C -> the standard on technical requirements specification, from the system engineering branch
Requirements are classified in:

- Functional -> define what has to be done (ex: the s/c shall land on the planet)
- Operational -> define how it has to be done (ex: the s/c shall land softly on the planet) => only when something happens can the system start a specific procedure
- Performance -> define how well the requirement shall be satisfied (ex: the s/c shall land softly on the planet with a vertical velocity of less than 1 m/s)
- Verification -> define how the requirement shall be verified (with tests, numerically, etc.)

OSS: If I don't specify that I need verification, nobody does it => we need to put requirements on that too.
When we fix some parameter that will influence the design, we must decide whether what we are saying is 'mandatory' or 'nice to have'.
EXAMPLE: I want my system to land with a vertical velocity of less than 1 m/s -> mandatory => I write 'shall'.
It would be nice to have the s/c land at 0.5 m/s (because it will limit the motion of the dust and the pollution of the mechanisms, or because it will make for a simpler design) -> not mandatory => 'should'.
Mandatory -> requirements that allow you to design the mission; if we do not meet them, we cannot do anything.
Nice to have -> requirements that enhance one aspect of the mission but are not mandatory to build the mission (if they are not satisfied, the mission can still be accomplished).
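The shall/should wording convention can even be checked mechanically; this is a toy classifier for illustration, not an ECSS tool:

```python
def classify(requirement_text):
    """Toy classifier for the wording convention:
    'shall' = mandatory, 'should' = nice to have."""
    words = requirement_text.lower().split()
    if "shall" in words:
        return "mandatory"
    if "should" in words:
        return "nice to have"
    return "not a requirement statement"

print(classify("The lander shall touch down below 1 m/s vertically"))
print(classify("The lander should touch down below 0.5 m/s"))
```

Real requirement databases use exactly this kind of keyword discipline so that mandatory items can be filtered and tracked.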
EXAMPLE: I'm collecting icy samples -> the samples shall be kept at a temperature lower than 70 K, otherwise they will be lost (sublimation). If I can keep them at an even lower temperature, it would be nice in terms of science, or because it relaxes the constraints on transporting the samples, simplifying the robotics.

Requirements must be:

- Achievable -> always ask yourself if what you are asking can be done or not -> simulate, build, etc. and understand if you are capable of doing that.
- Affordable -> can we build a nuclear reactor for power supply on a planetary base if I need to launch in 3 years? Is it affordable with respect to the resources that I have?
EXAMPLE: I can use chemical propulsion to get to the final debris, but if I have the limitation of staying within a CubeSat size, I cannot put a 20 N thruster in a CubeSat -> it is achievable but not affordable.
- Justified -> always specify the reason why you made that choice (everyone that reads the requirement must be able to understand it)
- Unambiguous -> for each requirement it shall be clear which one it is connected to and why (it should not be ambiguous nor a copy of another one)
- Validated -> always express how you will verify that a certain operation will be accomplished

OSS: You shall write “what” and not “how”.
EXAMPLE: “The system shall be capable of detumbling from a maximum velocity of 10°/s” -> good requirement.
“The system shall have reaction wheels to detumble from 10°/s” -> not a good requirement (you should not uniquely address the solution)
Let's have a look at the picture below; on top we have the classification of requirements (functional, mission, interfaces (ex: the system shall have an interface with the launcher; the system shall interface with the ground station at a certain frequency), performance, verification, etc.).

When writing functional requirements, we can ask ourselves if we have any indication for the orbit, pointing, coverage, timelines of the data, etc. -> use those as tools to stimulate your thoughts.

4.5 RACE MISSION


The RACE mission is a proposal from ESA not implemented so far (a child mission of PRISMA). The objective is the in-orbit demonstration and validation of technology (relative dynamics performed robotically -> docking and undocking) in 2 different configurations: when the target is non-cooperative and when it is cooperative (it controls its attitude so that the chaser can nicely approach for docking).

One fundamental aspect is the definition of autonomy in space (it could become a requirement): we need to define code that allows us to do activities not only time-based, as we saw in the ConOps, but event-based, which means that, for example, whenever my rotational velocity is below 0.1 °/s, then I can deploy my solar panels => the system is reasoning.

OSS: We have to test it really carefully, otherwise it can be dangerous.


We have to make decisions on what to do and when, but also on the goal of the entire mission (a typical scenario regards rovers -> where to go next, based on the environment).

Mission requirements and ConOps are classified:

The fact that we have a duration of 6 months should drive us in limiting the analyses we have to do -> we shall have an orbit that does not decay before 6 months, and if we are on an orbit that does not decay naturally, then we have to choose a propulsion system for that.

We have an important limit: we shall use technology that is already at TRL 8/9 (I cannot imagine any strange component, because I don't have time to implement it => I have to find solutions with a sufficiently high TRL so that I can easily test them, and before 2021 they are ready to fly and will work as a system and not as a payload).
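Screening a component catalogue against such a TRL floor is straightforward; the catalogue entries here are invented:

```python
# Invented component catalogue with technology readiness levels
catalogue = [
    {"name": "cold-gas thruster",  "trl": 9},
    {"name": "novel ion engine",   "trl": 5},
    {"name": "S-band transceiver", "trl": 8},
]

def flight_ready(components, min_trl=8):
    """Keep only technology mature enough to fly as a system."""
    return [c["name"] for c in components if c["trl"] >= min_trl]

print(flight_ready(catalogue))
```

Anything below the floor is discarded at this stage, no matter how attractive it looks on paper.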

OSS: When we start analyzing the goals we can put TBC -> we use these as placeholders, perform analyses and simplifications, and then come back and put a number on them.
This requirement is important because it tells us that the s/c can perform relative maneuvering only if there is a ground station with a visual link to the s/c, for driving the critical maneuvers if needed (telecommanding something, stopping a phase, etc.). This requirement imposes other constraints: typically, Δ𝜃 is smaller than Δ𝜙 because we have the masking angle (I keep a bit of margin because I have atmospheric attenuation, mountains, etc.) => there are natural dynamic limits in getting one close to the other. Typically, in LEO, as soon as the satellite rises it is not immediate to establish the link: we send a message and we need a few seconds for the s/c to recognize it => I need to size my relative dynamics to be effective within a tight Δ𝑡 for doing the specific operations.
OSS: If we want the 2 s/c close to each other we can perform a phasing maneuver -> we need a certain Δ𝑡 -> am I capable of doing this kind of maneuver in 5/6 minutes? If not (maybe I can do it, but I need peculiar instrumentation) I shall define other alternatives; if yes, I shall add other requirements (I can require more antennas on ground so that I will have more coverage -> the ground network shall have gaps in visibility not larger than 30 minutes/half a period/TBD).
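Checking a ground network against such a maximum-gap requirement can be sketched from a list of pass windows; the pass times below are invented:

```python
def max_visibility_gap(passes):
    """Largest gap between consecutive ground-station passes.
    passes: time-sorted list of (start, end) in minutes."""
    gaps = [nxt[0] - cur[1] for cur, nxt in zip(passes, passes[1:])]
    return max(gaps) if gaps else 0.0

# invented pass windows (minutes since epoch) over the whole network
passes = [(0, 8), (45, 53), (96, 104)]
worst_gap = max_visibility_gap(passes)
requirement_met = worst_gap <= 30.0   # e.g. "gap shall not exceed 30 min"
```

If the requirement is violated, the alternatives are adding ground stations or relaxing the maneuver timing.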
OSS: When I want 2 s/c to be close I have to make sure that they do not crash -> different ways:

- RF
- Imaging -> if I use a camera it will at least help identify the target => requirement: testing/approaching/fly-around maneuvering shall be done when the target is illuminated (in sunlight) -> not only do I need to be in visibility of a ground station, but the target shall also be in sunlight (if I'm in eclipse I cannot do anything => maybe I cannot complete the mission in 6 months)

It states which kind of experiments we have to do (it is mandatory because it is a “shall”, but the number has to be confirmed).
Why do target and chaser have different degrees of freedom? One of the requirements was to have an uncooperative target, but since it is only an experiment, I want to tune the capability of the chaser with respect to the level of complexity in the target's uncooperativeness (I can use the 3 dof of the target to compare what the chaser is capable of reconstructing in terms of attitude with respect to the real attitude of the target) => I start to put different initial conditions for the different phases of the mission and I set the levels of accuracy.

SBOBINA 14/03
The lesson starts by analyzing the ESA document ‘Preliminary Mission & System Requirements for the Rendezvous Autonomous CubeSats Experiment (RACE) IOD CubeSat Mission’, partially reported in the current section.
As suggested by the title of the document, which is part of the mission design requirement document (MDR), for each mission it is mandatory to fix the requirements for mission and system in the first stage (phase A).
During this crucial phase it is mandatory to define a level of autonomy from the beginning. For example, during the NASA mission Mars Polar Lander/Deep Space 2, due to the high response time of Earth-Mars communications, the rover had to have a level of autonomy keeping the cost/effectiveness ratio as low as possible. The level of autonomy needed was the ability to identify and interpret current optical images against a standard catalogue uploaded into the system, either for finding a new path for the next day or for the scientific mission. For example, if the rover is searching for water, it must aim at places or scenarios similar to the images associated with “water” in its catalogue (iron oxide or other minerals linked to past water), without ground control. Otherwise the rover, and the entire mission itself, would have had to wait a lot because of the high response time (send data to the orbiter, wait for the signal to get to Earth, wait for the ground center to download it, wait for the response from the ground team, all without considering visibility windows). Having a level of autonomy is much more complex, but it guarantees no waste of time. However, this logic must be aware of the risks the rover may encounter: if none of the image clusters is recognized or matches any of the scenarios in the catalogue, there must be a written line of code that orders the rover to enter safe mode and wait for ground control, to reduce the chances of doing something hazardous that may destroy the mission.
Note:

TBD = to be defined
TBC = to be confirmed
Initially, during the writing of the requirements, little to no data is available. It is good practice to
put a TBD or TBC label, being aware that I must upgrade it and insert a number (through statistics or
computation etc.) after some time.

Experiments requirements

MR-EXP-010. The requirement is about the introduction of DOFs. Since a guidance and control
function is needed during orbit navigation and flying demonstration, I must add the whole spectrum
of DOFs, correlated to architecture and performances, at least for the chaser. However, if the mission
is to demonstrate flying navigation and performances with a cooperative target, the latter must have
3 DOFs.

MR-EXP-020. GNC software must be robust and tested on ground. Once the GNC is tested, the
mission analysis team has a reference (benchmark) on ground thanks to which a general behavior is
defined. Reference performances are computed by replicating the environment. For that, the timeline
is scheduled considering these tests, applied both on the environment and on the software itself
before launching, since the building of engineering models, simulations and tests requires time,
costs and personnel.
Note.
SHALL: if there is a shall, the final design must consider that thing. It is mandatory, if it is not there
the mission cannot launch.

SHOULD: it is something nice to have, but if it is not there the mission can launch in any case.

MR-EXP-030. The requirement is about uploading the software from ground. The software shall not
be closed, in order to allow erasing parts of the software and uploading patches. Modes must be
added in the operational phase for testing: this enables ground control to upload patches that are
already tested on ground; then they must be tested during the mission itself.

MR-EXP-040. The number of flights around the target in the uncooperative mission will be defined
later. In this phase the three DOFs of the target shall not be used, leaving the latter as an
“inanimate” object.

Operational requirements
MR-OPS-010. The requirement is about rendezvous and docking. The first functionality that must
be granted is undocking. The implicit nested requirement is that whenever this mode is started,
the two segments (i.e., the two satellites) are already docked. Since there is no requirement that
specifically states that the two satellites must be docked during launch or docked after some phase,
the question remains: why does the sequence start with the requirement to undock? There are
multiple reasons.
One reason is safety: the segments must stay under a certain distance limit for safety reasons. If the
segments are already in proximity with the same velocity, there is a lesser chance of collision.
Proximity is defined using the concept of keep-out zone, that is a theoretical sphere of control
defined by the capability of the chaser to control and maneuver in an accurate way guaranteeing
collision avoidance with the target segment. The radius of this sphere is strictly connected with the
dimensions of the satellites, but also with the accuracy and frequency of the sensors, filters,
controller and actuators. The higher the accuracy of the system, the smaller the sphere will be.
Another reason is that the clamped configuration sets, for both segments, the same initial conditions
in terms of propagation of the state vector, increasing the accuracy and leaving the same error
for both. It is possible to reconstruct the state of only one segment, because the other one’s is the
same.
During the undocking phase the mission analysis team must evaluate if the satellite has a limitation
from a geometrical point of view: maybe there is a requirement that must be respected or there is a
certain area that shall not be touched. All of this must trigger a brainstorm regarding not the
solution, but the main constraints and problems due to the undocking phase. Why do the satellites
firstly separate and then stabilize, and not vice versa? (second requirement) This means that when
they are in the unclamped position, there are two objects that behave differently according to the
behavior they had before the separation. For that reason, it is mandatory to define a precise
hierarchy during this phase: master and slave. However, many questions may arise. For example,
should the target be switched off during docking, with the chaser, as the master, controlling the
docking sequence all by itself, or should the target have some sort of control? Which kind of
requirements do the two alternatives build up? Now Lavagna is talking about the docking interface:
what does the mission analysis team have to create as interface? Are a mechanical dock and a data
transfer sufficient? Which kind of mechanical interface is needed? All of these must build up other
questions and requirements.
MR-OPS-020. Requirement about the close fly-around sequence. The sequence starts by introducing
a requirement for initiating a random tumbling motion of the target. This condition leads to an
increased variety of tests for the formation flight experiment. However, this requires 3 DOFs for the
target.

MR-OPS-030. Requirement about the keep-out zone and close fly-around.
Fly around -> coordination of satellites flying near each other. An elliptic trajectory must be built up
for a closed orbit, resulting in periodic repetitions. However, the fly-around can also be not closed
(cycloid). The minimum distance shall be lower than 10 meters (TBC), in accordance with the keep-
out zone previously defined.
For the relative motion, a question may arise: is there any condition which, according to attitude and
position, makes no sense? Is there any case in which the absolute position of a segment is not
achievable in a certain attitude of the other one? I must be prepared to face the worst-case scenario
in this sense. Sizing and designing are not done according to the simplest case scenario that I may
face, but to the worst. Depending on how I initialize the rotation, I might start a particular
maneuver, depending on how I move.

MR-OPS-050. Safe hold point: height for safe operations. It must be well defined in a specific
reference frame and unit of measurement (for example meters in LVLH). This shall not be
underestimated, since the Mars polar mission (Mars Polar Lander/Deep Space 2) failed because of
a badly defined height (in feet instead of meters) during the switch-off of the thrusters. The
height that is defined could be computed by asking about and reviewing all functionalities and
requirements: does the system check itself with a certain level of autonomy or does it wait for
ground communication? Does the vision-based S/C wait for light to illuminate a certain object that
it has to reach, ensuring a safe encounter, or should it get a TBD number of images that build the
shape of the object?
3.4 Data requirements

MR-DAT-010. Requirements about data exchange for the cooperative rendezvous and docking
sequence. Data must be exchanged in real time (measuring for example the frequency, power
onboard, etc.). The sampling time must be fixed, depending on the dynamics, the Nyquist frequency,
and the measurements of the sensors and their filters.
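As a minimal sketch of the sampling-time reasoning (the ×10 margin over the Nyquist minimum is a common rule of thumb assumed here, not a figure from the document):

```python
def min_sampling_rate(f_max_hz, margin=10.0):
    """Minimum sampling rate for the fastest dynamics of interest.

    Nyquist only requires > 2 * f_max; digital control practice usually
    takes a larger factor (margin) over the fastest mode to be tracked.
    """
    return margin * f_max_hz

# e.g. relative-motion dynamics with the fastest mode at 0.05 Hz (assumed):
rate = min_sampling_rate(0.05)
print(rate, "Hz -> one sample every", 1 / rate, "s")
```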

3.6 Autonomy requirements

MR-AUT-020. Commissioning: phase in which the system is prepared to be active, usually performed
after launch (check everything on board, sensors, communication, etc.). The professor asks: why is
there a ‘shall’ to perform commissioning using time-tagged commands (i.e., after orbital release wait
5 minutes and then correct the S/C position, wait another 3 minutes and check the sensors, etc.)? No
answer was given.

MR-AUT-040. During the final approach, when the chaser is in the keep-out zone, actions must be
performed autonomously (continuous thrust in the last 5 minutes, for example). Why? Because the
control time is limited and ground control guidance requires additional response time that may
not be available (because the chaser is moving at a certain relative velocity, for example). All of this
autonomy must be checked before launch, looking at the response time and checking that it is quicker
than an open-loop ground control.


MR-AUT-150. The level of autonomy is event driven. The riskiest situation is an impact, so a fast
reaction is mandatory. The system has an image and a software capable of understanding what that
particular image given as input is; the system must be able to dictate the distance to the object in
the image, or the attitude that the S/C must have in that specific situation and so on, commanding a
braking maneuver etc.

4.2 system functional & performance requirements

SR-FUN-010. Docking accuracy: the performance of the docking system must be identified,
depending on sensors, filters and actuators. The mission system engineer may ask whether the
docking system should be built from scratch with the wanted accuracy (more cost) or built using
standard pieces with their own accuracy.

SR-FUN-020. The requirement is strongly dependent on SR-FUN-010.


Design engineers must build a docking mechanism, driven by a clever mechanical design that is
capable of reducing criticalities during the docking phase: it must be capable of facing environmental
and control uncertainties; for this the team has to estimate the level of uncertainty that the S/C will
encounter, with a given control accuracy at least one order of magnitude better (for precise
control). A very limited maximum velocity is required during docking, since a higher velocity is
associated with a higher risk. Magnetic docking is not trivial: the S/C must face problems
regarding saturation of the magnetically sensitive parts, due to interferences. For that, shielding is
mandatory. The S/C must also be able to undock; for that it shall use an electromagnetic device
(highlight/study the power demand of the device itself) in order not to have a permanent magnetic
field that could stick the two sides together when it is not necessary. One of the solutions is to have
a large truncated cone, in order to have a large margin of error that will drive the S/C’s docking
interface into position. Another point is the material: it must face a certain contact pressure without
elastically springing back, so that the two segments do not “bump back”.
SR-FUN-040 – The S/C cannot be controlled without a well-defined state vector; for that, a state
vector must be built using a TBD accuracy for navigation.

SR-FUN-050 – The chaser’s rate of acquiring images of the target depends on the relative velocity
between the two, on the relative distance, and on the range of situations they may encounter
(rotational velocity etc.).

4.3 system interface requirements

SR-INT-020. Interfaces are outside and inside of the S/C. Whenever I decide I want something (i.e.,
actions and reactions) that is event driven, I shall always identify and verify that I am in the state
that the system is telling me, also after a certain activity. Each subsystem shall think of which kind
of information it needs to propagate its state afterwards and communicate it to another part of
the s/c, through an interface.

SR-INT-050. Frequencies must be consistent with regulations set by ESA.

4.4 satellite physical requirements


SR-PHY-040. It is quite important to compute the perturbations found at a certain altitude, which
differ depending on the altitude. Usually what matters is not the mass or the shape alone, but the
ratios (ballistic coefficient etc.). Perturbation values must stay under a certain limit.
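The point about ratios can be made concrete with the drag perturbation: for the same density and velocity, the acceleration depends only on the ballistic coefficient BC = m/(Cd·A), not on mass or area separately. A sketch with illustrative LEO values (all assumed):

```python
def drag_acceleration(rho, v, m, cd, area):
    """Drag perturbation a = rho * v^2 / (2 * BC), with BC = m / (cd * area).

    Doubling both mass and area leaves BC, and hence the perturbation,
    unchanged: only the ratio matters.
    """
    bc = m / (cd * area)  # ballistic coefficient [kg/m^2]
    return rho * v**2 / (2 * bc)

# ~500 km altitude: rho ~ 1e-12 kg/m^3, v ~ 7600 m/s (rough values)
a = drag_acceleration(rho=1e-12, v=7600.0, m=100.0, cd=2.2, area=1.0)
print(f"{a:.2e} m/s^2")
```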
4.6 Environment

Always ask yourself during every activity: which is the environment that the S/C is inserted in? After
that, write all the requirements that may rise.
[Link] is one of the first clean space projects to remove a space debris.
The document shown by the professor is related to phase A: it fixes requirements and starts to think
about a solution to remove a big satellite.
The first of the two solutions proposed was a harpoon launched from 150 m away, ejected like a
missile; in the net solution, due to the impact the net would wrap up the debris.
This mission has analogies with RACE, but in this case the target is highly uncooperative.
The requirement 040 states that a single-point failure must not occur,
and this implies redundancy. There shall be no situation that makes me lose the mission completely.
We want a robust system.
Catastrophic severity means we lose the mission completely.
Critical severity means that we lose the most relevant functionalities.
If we lose our onboard PC this is a catastrophic event (it may happen because of high radiation or
due to an impact). We can have more than one computer onboard, but it's important that the software
is different.
The choice of the onboard PC is based on attitude determination and control, which require a high
frequency and a high computational speed.
Two-point failure: I have a failure but I can continue my mission. If I have another failure, ok, this
time the mission is over: catastrophic failure.

Example using the net solution:


• The net doesn't catch the debris, missing it. Critical failure, but my chaser is still alive;
• The net catches the object but the target rotates and drags the chaser in an uncontrolled
motion; this is critical, I'm creating another debris. One option could be cutting the string to
the net, but before acting I should understand the situation by doing measurements. I shall
have a threshold on the gyroscope to understand if the situation is dangerous, and I shall
compute the velocity with which I'm approaching the debris; so, first do some measuring
and then make my decision about cutting.

The requirement 060 tells that, for instance in the case of the arm, when I grasp I should assure
there is no risk of breaking some parts (materials in space are consumed by radiation, reactive
oxygen, etc.).
If I use the net I should decide the mesh size: what happens if it is too small or too large? Even if the
velocity is really small, if there is a small antenna I may break it, and I don't want it to become a
debris, I want to capture it. So, the mesh must be smaller than the smallest appendage I could break.
I can do simulations with various net velocities to understand the force with which to launch the
net and so the maximum velocity of the net.
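A first cut of that sizing loop can be sketched as a constant-acceleration ejection over a stroke; the net mass and stroke length below are pure assumptions for illustration:

```python
def launch_force(net_mass_kg, net_velocity_ms, stroke_m):
    """Force needed to eject the net, assuming constant acceleration
    over an ejection stroke: F = m * v^2 / (2 * s)."""
    return net_mass_kg * net_velocity_ms**2 / (2 * stroke_m)

# Sweep candidate net velocities to see how the required force grows:
for v in (1.0, 2.0, 5.0):  # [m/s]
    print(f"v = {v} m/s -> F = {launch_force(0.5, v, 0.1):.1f} N")
```

The quadratic growth of F with v is one reason to cap the net velocity once the mesh size already guarantees capture of the smallest appendage.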
The objective here was to arrive at a payload capable of obtaining water from the soil of the
Moon.
Imagine a chemical plant that collects lunar soil, puts it in the plant and produces water.

PL-020:
Why that requirement?
Because I don't know where I'll land (nowadays the landing precision is of some km, like from
Milan to Florence). So, we are not able to land exactly where we want; for this reason we
need this requirement.

On the Moon there are sand, rocks and many other kinds of soils. So, it would be nice to
have a robotic system that is able to identify the kind of sample I can use.

Looking at PL-010: I am asked to build a demonstrator plant in the lab and I must show I'm
capable of producing water from lunar soil.
How to perform tests without having Moon soil? I need a replica (the same holds for Mars).
Which are the fundamental properties of this replica soil?
- chemical composition (percentage of minerals and components)
- size of grains
- Phase change temperature (eutectic diagram to understand if the mixture melts or
not)
The size of the reactor is imposed by the 10 g I shall produce (the soil I must be able to handle
is 1 kg).

To make such a soil I must travel, grasp the right one, get the right percentages, and all this costs.
And I haven't even started the tests.

When I build a wheel for a rover, I need a simulant. When I develop something I need a way
to verify it quickly, and I find ways to verify only the requirements I'm interested in.

Sentinel-1 is a radar mission made of two satellites to reduce the gap in revisit time of the regions I
want to observe. The TRL had to be high because the mission at that time had to be ready in a
short time.
The duration of the mission is related to the objectives. For civilian missions it is usually 7-10 years,
which is the time to develop and launch another one with technology upgrades. For commercial
missions it is a compromise between investment and revenue.
For Earth observation satellites, the way I download data must be similar to non-ESA missions in
order to merge easily the data from different satellites.
The bands to be covered by scientists are linked to the orbit, launch site, nation of origin of the
mission, interval of mass and energy.

Timeliness
To satisfy it I can play with the altitude of the orbit, increase the ground network, or use a
constellation for data relay (Iridium, Globalstar, etc.).
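A rough way to see how altitude and ground-network size enter the timeliness budget (idealized sketch: evenly spaced contact opportunities, no visibility geometry):

```python
import math

MU = 398600.4418  # Earth's gravitational parameter [km^3/s^2]

def period_min(alt_km, r_earth=6371.0):
    """Circular-orbit period [min] at a given altitude."""
    a = r_earth + alt_km
    return 2 * math.pi * math.sqrt(a**3 / MU) / 60.0

def rough_max_latency(alt_km, contacts_per_orbit):
    """Worst-case wait between downlink opportunities, assuming they
    are evenly spread along the orbit (a strong simplification)."""
    return period_min(alt_km) / contacts_per_orbit

print(f"T ~ {period_min(500):.1f} min")
print(f"max latency ~ {rough_max_latency(500, 2):.1f} min with 2 contacts/orbit")
```

Lowering the altitude shortens the period only marginally; adding ground stations or a relay constellation (more contacts per orbit) is the stronger lever on latency.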
Example of requirements:
From one requirement, a lot are derived. This chart is related to a facility which uses artificial
intelligence to understand where to land.
Requirements must be verified. Each requirement has a number, a classification (mandatory or
else), who generated this requirement (its parent requirement), a verification method and a review-
of-design status (whether the check is completed or must be revisited). When I'm designing I think
of that verification. There are alternatives for the functionalities.

17/03

INTERFACES
Technically speaking, interfaces are something to be sized to interconnect the subsystems or to
interface the main system with the external environment. They can be engineering, programmatic,
logistic, or financial interfaces.

This concept is important for the whole model-based system engineering.

Internal and external interfaces:

1) Interfaces inside the system: interfaces can be something to be defined between two
subsystems. A simple example is the kind of support or connectors in an electronic port: to
send data from a battery to an electronic port or device there are electronic interfaces, but
we can also have electric interfaces, like the diameter of cables; they could be mechanical,
like a docking port, in which case it’s important to identify the mechanical complement we
have on one side and on the other.
2) Interfaces outside the system: with external elements in system design. These elements are
classically the launcher, so the interface is the adapter (we will see it from an interface
point of view and not a propulsion point of view), or the ground station.
Note: system means any system and vehicle, such as a rover, launcher, reentry vehicle and so on, so
s/c in general.
The interfaces must be designed to properly match all the related subsystems, so the requirements
must be unambiguous. The quantities are different depending on the interfaces (frequency, width of
the beam, signal to send to be recognized: all that is part of interfaces). A very important point is the
reference frame for interfaces, in particular the units adopted by the two entities involved.
Before starting to design and size, it is up to the engineer to identify the interfaces, the constraints
and the drivers, and to define the requirements for interfaces. (Ex: at a very high level, the ground
station shall be compliant with all frequencies adopted on board, or the launcher shall have an
adapter compliant with the maximum dimension of the s/c.)
Example: part of the document related to the interface requirements for the Hermes mission.
The scope of the chart is to keep in mind during the mission whether there are any criticalities or
something important related to interfaces.

The two diagrams are a way to manage and understand the interfaces, and they are structurally
identical: on the right there is an example, on the left the general scheme with all the components in
the system and the correlations among the components through the arrows. They are basically
composed of a matrix which identifies the elements in the system (they could be the subsystems, the
entities, the client and you, or the supplier: it is possible to put whatever) and the correlations.
This case is an N2 engineering diagram, so the elements are the subsystems; the interfaces are
identified in terms of systems or only mechanical, in one or both directions. The graph is useful to
visually cross-check that every link has been considered.
Example: the fuel tanks, the thrusters, the solar arrays, for sure there are interfaces mechanically
speaking between them and the structure.
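Once the N2 matrix is written down, that cross-check can be automated trivially. A toy sketch (subsystem names and links are invented for illustration):

```python
# Toy N^2-style interface matrix: a cell (from, to) lists the interface
# types declared between two subsystems.
n2 = {
    ("tanks", "structure"): ["mechanical"],
    ("thrusters", "structure"): ["mechanical"],
    ("solar_arrays", "structure"): ["mechanical"],
    ("tanks", "thrusters"): ["fluidic"],
}

def missing_links(required, matrix):
    """Return every required link that the matrix does not cover."""
    return [pair for pair in required if pair not in matrix]

required = [("tanks", "structure"), ("thrusters", "structure"),
            ("solar_arrays", "structure"), ("tanks", "thrusters")]
print(missing_links(required, n2))  # -> [] when every link is declared
```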
TRADING OFF ALTERNATIVES
Once the objectives and functionalities are defined and fixed in high-level requirements, the
activities in time begin to be analyzed (so the ConOps, correlations, interfaces) and the scene is fixed
in mind; all this job is done to open the space of alternatives.
This space is a search space used to define an interval of free variables; in this case it is a mixed trade
space in which to pick up the solution as a complex state vector (continuous, qualitative, and so on).

Design solution definition process

- Define the boundaries and the objectives; we can start understanding the different paths
to get what we want
- Define the alternatives (put in a table as many as possible in terms of problem solutions)
- Select a mechanism to trade them (that is, select criteria)
- Run an optimization, a decision-making process, or a vote in the team (any type of
mechanism, mathematical or not; then we have the tools: a quantity for each aspect that is
valid for all the alternatives we have in place, so that we can obtain these values for all of
them and compare the alternatives).

What to do practically: find, at the time being and at mission level, at least one alternative that can be
applied, then understand and justify why it makes sense according to the criteria, having selected
them and no others (e.g. solar arrays and not a nuclear generator).
The way to run this comparison could be through simulations, with similarity/comparison with
other missions, or with some prototypes (not our case). Even in a very preliminary phase it is possible
to verify the selection through prototypes, not only mathematically.

The more the alternative space is visited, the more the risk is reduced, because we are forced to
reason on the alternatives and the choices. We help any criticality to come up before it could
happen.
The following diagram helps verify if we are doing things correctly.
Thanks to the definition in parallel of the goals and the functional analysis, what to do and the
possible solutions are identified, so it is possible to start trading after having defined the criteria,
i.e. how to judge the alternatives. How to judge is important because the ranking can be completely
different depending on the quantity we select (cost, robustness, risk, level of success, final amount
of data, mass, power demanded, …).
Note: functional analysis is what to do to solve a problem or to reach a goal, in terms of analyzing
and writing the requirements, not solving already.

Pareto front

Consider a vector of variables x = [x1, x2, …, xn] to be optimized and a cost function vector
J = [J1, J2, …, Jm] = [Δv, ToF, mlaunch, lifetime, TRL, Pdemanded, …], which is the set of criteria with
respect to which to optimize (it could be the Δv, the ToF, the launch mass, the lifetime, the TRL).
There could also be a bunch of other vectors related to inequality and equality constraints,
h(x) ≤ 0 and g(x) = 0, so everything that is a function of this x, which can represent the dynamics,
the maximum mass of the launcher and so on.
In the case in which constraints are not considered, or can be included into the cost function, the
problem can be solved in two ways:

- Multi-function optimization: consists in collapsing a vectorial problem into a single dimension,
so all the J functions are grouped in some way, for example by defining a scalar function that is
the sum of all the J normalized according to some reference, possibly squared:
f = Σi (wi · Ji / Ji,ref)²
where wi are the weights. In this way it is possible to weight the functions and address the
optimization problem directly, highlighting what is preferred.
- Non-dominated solutions: to be as general as we can. Let’s assume to have just two
elements in the J function (of course it is possible to depict the same in a hyperspace of cost
functions). So, for each solution inside the boundaries, I find a point inside the J1-J2 space,
then another one, and so on. All of these points (both dots and stars in the figure) correspond
to a very precise x̄ that brings to a specific value of the cost vector.
If the aim is to minimize, the idea is to arrive at the origin, where there is the so-called
utopian point, the one that minimizes both functions; it is impossible to reach, so we need to
compromise. Whenever we have a solution, we look whether there is any other solution in the
space that is better than this one for at least one of the two cost functions without being worse
in the other. Doing this for each iteration, the non-dominated points are connected (if the
space is not closed we cannot connect them, it’s just a dotted space), and so we have a
front. If I move from A to B along the front I have a better J2 and a worse J1, and the other
way round. Then for each iteration in my simulation I try to push the front as close as possible
to the utopian point.

At these points I still haven’t judged the functions I’m comparing; I haven’t defined which
function I prefer, so the weights w haven’t been defined yet. Somehow, I’m mapping all the w
values from 0 to 1. At the top left of the front we have w1=1, w2=0 (I privilege J1
minimization) and the contrary at the bottom right. In the middle we have the mean, i.e. the
point at minimum distance from the origin, so this is the best compromise: weighting the two
aspects identically.
All over the front we have the whole distribution of weights, so there is always the possibility
to pick up different alternatives and start studying one or another; in this way there is already
a pre-selected, restricted subset of optimal solutions to be judged according to engineering
sensibility.
This is the way to work classically: restrict the domain, be sure it’s the best.
This is a loop, so if where I arrived is not satisfying, I can change, removing or adding a
criterion.
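The two approaches above can be sketched together in a few lines: a non-dominated (Pareto) filter, plus the weighted-sum scalarization to pick one compromise off the front. The cost vectors below are invented toy values (Δv in km/s, ToF in days):

```python
def dominates(a, b):
    """a dominates b (minimization): no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated cost vectors (the front)."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def weighted_sum(j, j_ref, w):
    """Scalarization f = sum((w_i * J_i / J_i_ref)^2), as in the notes."""
    return sum((wi * ji / jr) ** 2 for ji, jr, wi in zip(j, j_ref, w))

pts = [(3.0, 200), (2.5, 300), (4.0, 150), (3.5, 400), (2.8, 250)]
front = pareto_front(pts)        # (3.5, 400) is dominated and drops out
best = min(front, key=lambda p: weighted_sum(p, j_ref=(4.0, 400.0), w=(0.5, 0.5)))
print(front)
print(best)
```

Sweeping w over [0, 1] walks along the front; equal weights pick the "best compromise" point discussed above.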

Trade process example


Manned mission to the Moon: start from selecting whether to launch the cargo and the crew
together or separately, then depending on this select the launcher. These are only the alternatives;
the criteria are up to us, so besides this there will be the criteria (overall cost or mass, the risk… these
are the J elements).
In the separated case we may integrate them, so classically we rapidly arrive at an unmanageable
number of alternatives (this shouldn’t be the case in which we arrive, because we are doing
reverse engineering, so we want to find alternatives near the real one and explain why they were not
selected); in the design phase (so not reverse engineering), at a certain level, with 5/6 alternatives we
start removing what is not feasible in terms of quantities for comparison. So it is up to a good
engineer to decide where to go, where to stop, or which branch to select.
For the design of a rover, at the root the alternatives are fewer, but then the size, the level of
autonomy, the way to move (legs or wheels) become too many, so we must stop before. Think
cleverly when selecting.

Some examples on why to trade off the system: on the left, what we can ask ourselves to develop
the system; on the right, to operate the system. The ones underlined are the most interesting for the
discussion.

Development related:

• Firstly, we can decide whether to buy something already off the shelf or custom build it.
Define and find any J function that can be used to make the decision on that, or any driver
to select one with respect to the others.
When we select a criterion, always think of concurring criteria: for example if we have the
ToF, the Δv of course plays against the ToF, so we must balance all the aspects, because if all
of them are concurrent, then our optimization function will be almost monotonic with
respect to the variables we have.
• On onboard autonomy: are there any drivers that can help understand whether it is better to
process data directly on board or just download them as they are? Why not send raw
data? Maybe because we have few visibility windows, so I cannot overload my
onboard memory while doing nothing before another measurement; better to enhance the
onboard capability instead of having a large onboard memory. Few windows can be due to
orbital mechanics, or to cost (we cannot have a lot of stations), or to tumbling; data may need
to be used immediately. We shall compare the cost of new configurable onboard processors
for high data processing with respect to having more operators, more ground stations and so
on; then, comparing according to the same J, we select one of the branches and drop
the other roughly at this level.
• The level of robustness and redundancy wanted on board: we have different elements for
the same functionality; the camera can become a star tracker, the PC working on ADCS can
be the backup of the main onboard PC, the ground station for the science, selected in S-
band, can become the backup for telemetry data downloaded in the same frequency. If
redundancy is in the design, understand whether it is needed; in this case we will have fewer
ground stations (less cost, less complexity in continuously pointing the attitude at each
station). Redundancy can reduce the network on ground.
• The level of test is strongly related to risk and reliability, so what to test? The whole system or
only subsystems? What if we break something while testing in flight configuration? Shall we
leave the structure deformed, or build a model for the structural test and then redefine the
design? These alternatives will affect the engineering of course.
• Life cycle approaches: how we want to manage the project? With V diagram, with
incremental or sequential?
Operations related:

• Imagine that we want to build a constellation like Sentinel: on what would we base the
comparison? Do I build satellites that will survive for a certain time and then replace them, or
satellites that can survive the whole mission? Which kind of quantities to compare? It’s
immediate: we know the launch cost depends on mass; depending on the altitude of the orbit
we may need more or less propellant for the perturbations. The idea is to understand, among
all the alternatives, which quantities to consider, involving orbital mechanics, attitude control,
propulsion, launcher, so that at the end we can say what is the best alternative (replaceable
satellites or definitive ones).
• Controlled versus uncontrolled reentry concerns the flight path angle at which we reenter the
atmosphere, so that we reduce the dispersion ellipse on the ground. That could depend on
regulations and be a requirement, or can depend on the vehicle size.
This list is to stimulate thinking.

Just focus on this table: the trade-off of alternatives happens at every stage of the lifecycle.
It exists in implementation (selecting the supplier, the final component, the final launcher), in
the operational phase (selecting or adjusting the orbit), during flight dynamics (from telemetry
we see we don't have the result we want; what to do? Come back to nominal, or is it okay to stay?).
Ex. Artemis mission (GEO orbit): Artemis was one of the first missions with electric propulsion
onboard, for station keeping, not for orbit control.
It was launched into a GEO transfer orbit, but due to an error the orbit it was released into was
lower than planned. At that point it still had to be transferred to GEO, but under a constraint on
fuel that cannot be refilled, so what to do? Consider two aspects: the orbit was near-equatorial
because it was targeting GEO, and there the effect of the Moon is out of plane; a gyroscopic effect
is expected, with a variation of inclination. To exploit this effect, the price of not staying in the
zero-inclination plane must be considered: loss of stationarity with respect to the ground, loss of
the point the satellite wanted to communicate with, loss of the wanted slice of latitude to be
covered.
Additionally, keep in mind the satellite is GEO, so it has quite a long lifetime (10 years), but a
bunch of years would be spent raising the energy, so only a small amount of time would remain for
service at the wanted altitude. Another problem is the Van Allen belts, two highly ionized regions
that are not nice for the semiconductors of the electronics onboard: the systems would age rapidly,
so we have to avoid staying there.
Another idea is to use the propellant sized to reach the final orbit as it is; in this way the final
orbit will be lower, partially losing the scope of the mission (not GEO anymore).
The decision was to use the thrusters meant for station keeping and attitude control (the electric
thrusters, so not designed for orbital transfers) and to couple them with the fuel for the orbital
change: they did what they could with the chemical thrusters to exit the core of the belts, then
started spiraling with the electric ones to reach a slightly lower orbit, sacrificing part of the
lifetime of the system, but reaching an almost fixed orbit (relative to the Earth). This is a
trade-off on the orbit to recover from a failure of the launcher.

Drivers sit in-between criteria and requirements. Among the requirements there are the so-called
drivers, which lead, completely or partially, the system design process: a bunch of mandatory,
highly constrained requirements that greatly influence the decision-making process at any level
of the project.
Ex. We have a mission for risk management on the Earth. One driver is the time resolution and the
coverage: visiting the whole Earth continuously and frequently. It is both a requirement and the
primary aspect to think about when judging the alternatives. Mission to Mercury -> driver: thermal
control. Pluto -> drivers: thermal, power supply and telecoms (the distance is the driver). These
aspects come out of functionalities (and are not necessarily functionalities themselves) and help
in building the alternatives.
This bubble is basically the same as the V diagram: we start from top-left and arrive at top-right,
going first top-down, then bottom-up.

• Top-down: objectives and functionality analysis. We arrive at the very bottom-left with
fixed requirements and an idea of a solution. When we arrive at a potential design, we trade
off the alternatives and select one of them with respect to a J vector.
• Bottom-up: now building can start (not in our charge as designers).

The steps to put in the design are:

• To acquire or to make.
• To do the functional test.
• To assemble it.
• To go on with the actual tests, from the single equipment up to the full product.
Ex. I have to do something: will I buy it or build it? A classical question the agency asks in the
design phase is to compare the costs. Is there anything that can be reused? Usually this mostly
applies to payloads: we build a mimic of the flight model to do the tests and then the flight
model; the other model is kept on ground but is still available, so if by chance there is a mission
that could do a similar analysis it can be reused.
It was the case for a lot of payloads, such as GIADA (an instrument to analyze the dust particles
of the comet) on ROSETTA: they had two models, and the one on ground was reused for the DAWN
mission (to Ceres, the largest asteroid in the main belt). It was robust because already used.
Verification: at requirement level, we shall select how to verify (test, analysis), that is, choose
the activities to do to verify the requirements. Is the system answering the requirements? It
focuses on demonstrating that the system meets the requirements.

Validation: are the requirements correctly written for the objectives? That is, is the system
respecting the requirements correctly designed to meet the objectives? It focuses on
demonstrating that the requirements meet the objectives.
Ex. We have put a requirement on pointing accuracy, then an attitude determination and control
subsystem is built to be capable of staying within this accuracy, so the requirement is satisfied ->
verification okay. We have a radiometer onboard: if the objective is, for example, to analyze the
ionosphere of a planet, this asks for an accuracy of a few degrees, so what we did in the
requirements made no sense for the mission objectives -> validation not okay.

This is one of the levels of alternatives that we have on the upper-right leg of the V diagram.
Here we have a list of alternatives for each system component: from a very low level, with
breadboards (just a mock-up, not necessarily made of the materials that are going to be used in
space, but with the same mechanical interface and shape as the final one), up to the flight unit.
Moving top-down, the implementation increases in manufacturing complexity, is more delicate in
terms of cleanliness, and is increasingly compliant with the requirements about the space
environment. Usually, when something is new and we don't know how it will behave, we focus on
the breadboards.

At a high level there are the qualification and the flight units, which are points of critical
decision. The qualification unit is identical to the one that is going to fly (in materials,
performance and so on), but it will never fly because it is stressed too much in the tests. The
flight unit will do the same tests as the qualification unit, but with lower stresses (specific
for the mission) to check if errors have been made. If the system is then okay, it is ready. What
to test or not is up to the system engineer.
Ex: We have arrived at a component to be flown (a net to collect debris) in a significant
environment (microgravity). We can:

• Do simulations, building multibody-dynamics models of the net with a whole high-fidelity
propagator for the environment, and then jump directly to building the system to be flown
on the plane (size, material, velocity).
• Pass through a breadboard to an engineering model: just build the net by hand, no matter
the shape or material, and start throwing it with a compressed-air system to see the
evolution of the net, whether it makes functional sense and can work.
In the second option there are no quantitatively significant results and no attention to the real
parameters; it is just a functional demonstration to see if it makes sense. If it is okay, keep on
going, then run simulations on an emulator that can be corrected (for example, the g value).
This is what is classically done. An engineering model is good with respect to the properties we
want to reproduce.
Example: we want to verify the correctness of the thermal design, so we build a structure with the
same configuration and correct materials and then put inside dummy masses representative of the
internal material (not the real units). The system is then put in a thermal vacuum chamber for
testing; so, as written in the slide (Engineering Unit), we have a high-fidelity unit with respect
to a very specific aspect, and we demonstrate quantitatively the critical aspects.
At the end we have for sure to build the flight unit, and the qualification is up to us.
Qualification: a unit (the whole satellite or just a component) undergoes the worst conditions it
might encounter in space, for the longest period (loads, temperature); we stress it to the worst
environmental condition it can encounter, and if it survives (it respects the final requirements
for the test, no criticalities in temperature and so on) the system is space qualified, so it is
okay to fly. The qualification unit doesn't fly because it was stressed a lot; we take another
identical unit and put it into the real environment it will encounter (flight unit).

The flight unit is tested only for the real-case loads; if okay, it is sent to fly.


Between qualification and flight unit there can be the protoflight unit, usually when we are in a
hurry or have a low budget (ex. CubeSats or rapid design). The loads are applied at qualification
level, but for the duration used for acceptance: the unit is stressed to the maximum load but not
for the maximum time. It's a compromise.

This is done at design level by the system engineer when the alternative trade-off is on.
Example: we see all subsystems onboard (payload included) and the kinds of models we consider:

• Breadboard model, just for functional analysis.
• The virtual emulator.
• The EM/suitcase (the latter is for TTMTC), which is strictly representative of the flight
unit case by case: this means I will play with the identical board only, with the identical
unit for ADCS and TTMTC.
• Protoflight model (PFM): for all of them, to shorten time and reduce cost.
Note: when working with an EM we are not required to stay in a clean room (large advantage). The
STM is the EM just for the thermal and mechanical aspects.

This matrix shall be filled by reasoning, as an output of the trade-off, depending on the mission.
Then I correlate the chosen models with the tests I want to do.
We have the subsystems, increasing accuracy of the models, and the logic of the tests to do. For
attitude it makes sense to do functional tests on the electronics before starting to connect the
rest, because the attitude is actuated by the electronics on board. Check if it is okay from visual
control, mass control, the very simple physical verifications; then connect the cables and start
looking at the signals coming in; then connect the real power supply and so on.
This is a logical connection in our decision to put the bricks together: what to build? What's the
reliability? How to correlate?
This is done in phase A because now I know not only that I have to buy at least two units
(engineering and flight models), but also that I must start procuring the engineering models now,
well before flight, so I also have the timeline of the costs (ex. plan to procure materials on
time, see nowadays' issues).

Hermes was an astrophysics mission, so it focused only on electronics and attitude control because
of the co-pointing of the payloads; the mission was driven by this capability, so the models are
stressed there.
For Inspector (this example), the driver of the mission is imaging the debris and processing the
images on board: emulators also for electronics, image processing and control of the platform
(pointing of the thermal and visual cameras).

10. Introduction to Relative Dynamics and Trajectory Design


Relative dynamics still belongs to the orbital mechanics domain and is used to describe the motion
between 2 bodies orbiting a common attractor. Since the satellites are very close, it is no longer
convenient to use their absolute representation with respect to the reference frame attached to the
planet (a very broad domain) → we move the description of the motion very close to the bodies that
are moving.
10.1 Reference frames
Since we have to describe the motion of a s/c in the close proximity of a target, we define a
reference frame attached to the orbiting body (the target) and centered on it. This is a moving
(accelerated) frame, since it follows the curved orbital trajectory.
OSS: The frame is non-inertial → we shall also consider Coriolis forces, Euler forces, etc.
There is no single best reference frame for describing the relative dynamics → the 2 most
common reference frames are:

1) RVD → used in rendezvous and docking:


- x-axis (V-bar) completes the frame (it points toward the
velocity → true only if circular orbit)
- z-axis (R-bar) aligned to the radial vector pointing towards the
Earth
- y-axis (H-bar) aligned with angular momentum

2) FF → used in formation flying missions:


- x axis points out in the direction of radial position,
- y-axis (V-bar) completes the right-handed reference frame
- z-axis is aligned with angular momentum

10.2 Derivation of cartesian models


To derive the cartesian model we start from the absolute non-linear model and write the relative
one by introducing a linearization (Taylor expansion). To handle the problem with a simpler
approach we introduce an extra assumption: a nearly circular orbit => we obtain differential
equations that can be easily solved.
Note that the following results are analytic solutions valid in a very restricted domain → this
model can be applied with a good level of validity when we are on a nearly circular orbit and close
to the target (distance from the target much smaller than the orbit radius). Therefore, we need to
be careful with the domain of the model.

10.3 Clohessy-Wiltshire equations


If we introduce the linearization and the nearly-circular orbit assumption, then we obtain the
Clohessy-Wiltshire equations (written in the slides side by side for the RVD frame and the FF
frame).

We can observe that the equations are exactly the same (the only difference is in the symbols): one
equation (out-of-plane component) is decoupled from the other 2 (in-plane components) → y
component in RVD frame, z in FF frame.
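For reference, in the FF frame (x radial, y along-track, z cross-track) the Clohessy-Wiltshire
equations take the standard form:

𝛿ẍ − 2𝑛 𝛿ẏ − 3𝑛² 𝛿𝑥 = 0
𝛿ÿ + 2𝑛 𝛿ẋ = 0
𝛿z̈ + 𝑛² 𝛿𝑧 = 0

where 𝑛 is the mean motion of the reference orbit; in the RVD frame the same structure holds with
relabeled axes, which is why the out-of-plane equation (y in RVD, z in FF) decouples from the
in-plane pair.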

The forced solution to the CW equation in FF frame is:

The question is: how can we select the initial conditions in order to get some peculiar geometry
(closed orbits, straight motion, stable points etc.)?
EXAMPLE: In case of rendezvous, we want to design some trajectories around the target
characterized by a minimum distance and a relative period (time that takes to go around the
target).
In the solution we observe that all terms are periodic except the terms proportional to 𝑛𝑡, which
show a secular drift => (sin(𝑛𝑡) − 𝑛𝑡) is a combination of a sinusoidal term and a straight line
(with a negative coefficient) that diverges unbounded; therefore, as time grows this term just
increases.

It is fairly simple to understand how to proceed:

- If we want to get a closed orbit, we need to eliminate that term → if we set the coefficient
multiplying 𝑛𝑡 to zero, the other equations will be just combinations of sine and cosine
functions => sinusoidal (periodic) expressions.
- The only drifting component will always be the y direction (along-track direction ⁓ velocity
axis).
Let's think about the geometry: we have the Earth and a satellite that follows its reference orbit.
If we have 2 s/c in close vicinity, the only motion that drifts away is the along-track one → it's
a line (because of the linearization) aligned with the velocity vector.
OSS: It never happens that the second spacecraft drifts unboundedly in the radial direction: it
would be on a different orbit, and the only thing it can do is follow that orbit (it is not
feasible for the s/c to follow a path that unboundedly leaves the reference frame along the x
direction).
If we look at the solution, the z component is actually behaving as a harmonic oscillator => whatever
the in-plane coordinates are, the z components will just oscillate across the plane harmonically.
Imagine to have a spacecraft which follows an absolute orbit
(z aligned to angular momentum vector); it can follow a
similar orbit that crosses the first orbit and arrives at the
periapsis. The z component describes the position of point A
with respect to the position of point B.
In LVLH (relative frame) the motion of the point will be
harmonic across the origin → trivial to understand. If we
consider the spacecraft along the relative orbit, its cross-
track distance will increase from 0 to a certain value, it goes
back to zero and then it increases again but in negative
direction => same orbital dynamics but in different perspective.
We have seen that the trajectory is a combination of sine and cosine with the same argument (𝑛𝑡)
=> the relative motion has a single frequency, equal to 𝑛, where 𝑛 is the mean motion of the
system. As a consequence, what you get is a periodic motion with period equal to the orbital period
of the reference orbit.
The time it takes from A to B is half of the period (always an
integer fraction of the orbital period) => if we want to design
a natural relative orbit, we know that the period will always be
a certain ratio between the reference period and an integer
number. In particular, in bounded orbits we get back to the same
point after 1 period (we can't do an inspection trajectory in a
time different from 1 period unless we use thrusters → no more
natural motion).
Let's now generalize: the relative dynamics expressed by the
linearized equations, in general, leads to a geometry of the
motion which is an ellipse with possibly varying center (if there
is drift, the ellipse moves its center in time).

Let's see the mathematical proof that leads to this idea; we can write:
𝛿𝑥 = 𝐶1 + 𝐶4 sin(𝑛𝑡) − 𝐶3 cos(𝑛𝑡) (1)
𝛿𝑦 = 𝐶2 − (3/2) 𝐶1 (𝑛𝑡) + 2𝐶3 sin(𝑛𝑡) + 2𝐶4 cos(𝑛𝑡) (2)
where the 𝐶𝑖 are constant terms that represent combinations of the initial conditions:
𝐶1 = 4𝛿𝑥0 + 2𝛿ẏ0/𝑛        𝐶2 = 𝛿𝑦0 − 2𝛿ẋ0/𝑛
𝐶3 = 3𝛿𝑥0 + 2𝛿ẏ0/𝑛        𝐶4 = 𝛿ẋ0/𝑛
If we combine (1) and (2) so as to eliminate the trigonometric terms, we obtain:
(𝛿𝑥 − 𝐶1)² + (𝛿𝑦 − 𝐶2 + (3/2)𝐶1(𝑛𝑡))²/4 = 𝐶3² + 𝐶4²
→ (𝛿𝑥 − 𝐶1)²/(𝐶3² + 𝐶4²) + (𝛿𝑦 − 𝐶2 + (3/2)𝐶1(𝑛𝑡))²/(4(𝐶3² + 𝐶4²)) = 1

Whenever 𝐶3² + 𝐶4² ≠ 0 we can recast the solution of the differential equations into this form →
the canonical form of an ellipse with semi-axes √(𝐶3² + 𝐶4²) (along x) and 2√(𝐶3² + 𝐶4²) (along y).
OSS: In general, the geometry is an ellipse with a moving center → the shift along y is given by a
constant contribution 𝐶2 plus a drifting part −(3/2)𝐶1(𝑛𝑡).

OSS: CW is a linearized dynamical model expressed in cartesian coordinates (3 axes where 3
coordinates determine the position of a point).
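The ellipse identity above can be sanity-checked numerically; a sketch in Python, with illustrative values for the mean motion and the initial relative state (symbols follow the constants 𝐶1..𝐶4 defined in the text):

```python
import numpy as np

# Mean motion and arbitrary initial relative state (FF frame: x radial, y along-track)
n = 0.0011  # rad/s, roughly a LEO mean motion
dx0, dy0, dvx0, dvy0 = 120.0, -300.0, 0.05, -0.08  # m and m/s, illustrative values

# Constants of the in-plane CW solution, as defined in the text
C1 = 4 * dx0 + 2 * dvy0 / n
C2 = dy0 - 2 * dvx0 / n
C3 = 3 * dx0 + 2 * dvy0 / n
C4 = dvx0 / n

for t in np.linspace(0.0, 2 * np.pi / n, 50):
    s, c = np.sin(n * t), np.cos(n * t)
    dx = C1 + C4 * s - C3 * c
    dy = C2 - 1.5 * C1 * (n * t) + 2 * C3 * s + 2 * C4 * c
    # Identity: (dx - C1)^2 + ((dy - C2 + 1.5*C1*n*t)/2)^2 = C3^2 + C4^2
    lhs = (dx - C1) ** 2 + ((dy - C2 + 1.5 * C1 * n * t) / 2) ** 2
    assert np.isclose(lhs, C3 ** 2 + C4 ** 2)
print("ellipse invariant holds")
```

The identity holds at every instant, which is why the in-plane trajectory is always an ellipse (possibly with a drifting center).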
10.4 Matlab simulation

The very first case is 𝐶3² + 𝐶4² = 0, which holds if and only if 𝐶3 = 𝐶4 = 0. If we give some
random conditions, but impose that these terms are zero, then we obtain something like the figure
below.
This is actually the situation where we have 2 coplanar concentric orbits, and we express the
motion of the outer one with respect to the inner one. The only thing that happens is that the
internal orbit goes faster (shorter period) => if we are on the smaller orbit, we will see the
spacecraft on the other orbit lagging behind. In reality you would see a curved line (because of
the orbit), but in the linearized world we don't have a curvilinear reference frame, so the motion
results in a straight line.
OSS: We can’t take these relative dynamics to be valid across the
entire domain → if the distance between the two is for example
10000 km, then this straight line is no longer valid (s/c1 will see s/c2
as in point B but this is not true).
In the second simulation we want to eliminate the drifting component → (3/2)𝐶1(𝑛𝑡) must vanish,
i.e. 𝐶1 = 0. If this happens what we get, in general, is a stable planar orbit.

If, for instance, we add a cross-track component, we expect the planar projection of the motion to
be unchanged; we just introduce a harmonic oscillation along z (out-of-plane component).
The third simulation is representative of the most general case: with a random initial condition
what we get is a drifting trajectory (along the y-axis):

These kinds of trajectories (cycloids) are used to naturally approach or move away from a target →
they are useful, but they need to be controlled.
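The simulations of this section can be reproduced with the analytic CW solution; a Python sketch (mean motion and states are illustrative) showing that cancelling the drift coefficient (𝛿ẏ0 = −2𝑛𝛿𝑥0, i.e. 𝐶1 = 0) gives a closed relative orbit, while a generic condition drifts along-track:

```python
import numpy as np

def cw_state(t, n, r0, v0):
    """Analytic CW position solution in the FF frame (x radial, y along-track, z cross-track)."""
    x0, y0, z0 = r0
    vx0, vy0, vz0 = v0
    s, c = np.sin(n * t), np.cos(n * t)
    x = (4 - 3 * c) * x0 + (s / n) * vx0 + 2 * (1 - c) / n * vy0
    y = 6 * (s - n * t) * x0 + y0 + 2 * (c - 1) / n * vx0 + (4 * s - 3 * n * t) / n * vy0
    z = c * z0 + (s / n) * vz0
    return np.array([x, y, z])

n = 0.0011           # mean motion [rad/s]
T = 2 * np.pi / n    # reference orbital period

# Closed relative orbit: cancel the secular drift by choosing vy0 = -2*n*x0
r0 = np.array([100.0, 50.0, 30.0])
v0 = np.array([0.02, -2 * n * r0[0], 0.0])
assert np.allclose(cw_state(T, n, r0, v0), r0, atol=1e-6)  # back to start after one period

# Generic initial condition: a secular along-track drift appears
v0_drift = np.array([0.02, 0.05, 0.0])
assert not np.allclose(cw_state(T, n, r0, v0_drift), r0, atol=1.0)
print("drift-free condition gives a closed relative orbit")
```

The cross-track component in `r0` simply oscillates harmonically and returns to its initial value after one period, as stated in the text.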
10.5 Static points
A very important concept in relative dynamics is that there are positions in which the satellite
can stay static with respect to the target. This locus of points (in the linearized model it is a
straight line) coincides with the along-track points (where the relative velocity is zero) → if I
put my s/c into an initial condition with only a shift in the along-track position, that is the
only set of stable points in which I can stay static.
Generally speaking, when we have a mission with a proximity-operations phase, we typically use
different kinds of sensors and actuators (especially with GNC demonstrations) to find the relative
position between the 2 satellites. These points are very much used because they are static and
typically represent the transition between the absolute phase (when we do the Hohmann transfer and
other maneuvers) and the proximity phase (after the maneuver we reach a certain point in which we
want to be static for a moment while we commission all the sensors, before starting the proximity
operations).
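A minimal numeric illustration of the hold point (values illustrative): with a pure along-track offset and zero relative velocity, all the CW constants except 𝐶2 vanish, so the relative state never changes:

```python
import numpy as np

n = 0.0011          # mean motion [rad/s]
dy0 = -500.0        # hold point 500 m behind the target, along-track [m]

# With only an along-track offset (zero radial/cross-track offset, zero relative
# velocity) the CW constants reduce to C1 = C3 = C4 = 0 and C2 = dy0, so the
# in-plane solution collapses to dx(t) = 0, dy(t) = C2 for all t.
C1, C2, C3, C4 = 0.0, dy0, 0.0, 0.0
for t in np.linspace(0.0, 2 * np.pi / n, 25):
    dx = C1 + C4 * np.sin(n * t) - C3 * np.cos(n * t)
    dy = C2 - 1.5 * C1 * n * t + 2 * C3 * np.sin(n * t) + 2 * C4 * np.cos(n * t)
    assert (dx, dy) == (0.0, dy0)
print("the along-track line is a locus of static points")
```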

OSS: The selection of these points is typically influenced by the type of sensors we have.
EXAMPLE: if we commission a camera for relative navigation, we must be sure that we are
using a holding point in which our camera sees the target (if we have a large FoV and we
are very far, the target might be half a pixel → hard to commission; if we are too
close and we have a narrow FoV, commissioning will require a very precise attitude control
which maybe cannot be guaranteed in the first phase) => the choice of these points is a
system-level decision.

10.6 Bi-impulsive maneuver


The bi-impulsive maneuver is the easiest way to design a relative transfer.
Imagine we are at point A and we want to rendezvous with a certain target (the target is the origin
of the LVLH frame); the simplest maneuver can be designed using impulses (thrusters) and
introducing the state transition matrix. This matrix is used to recast the solution of the CW
equations into matrix form: given a certain set of initial conditions, we multiply the initial
state vector by the state transition matrix to get the state at a given instant in time.

We have the state transition matrix and we know where we want to be at the end (final state);
therefore, we just need to determine, at the initial position, the velocity we need in order to
enter a natural evolution that leads to the final point.
In general, we have 2 ∆v: one at the start of the maneuver and a second one at the end (when we
reach the final condition, we want to cancel our velocity). The terms we know are the velocity
before the first burn and the velocity wanted after the second burn (at the very end):
{Δ𝑣0} = {𝑣0+} − {𝑣0−}

{Δ𝑣𝑓} = {𝑣𝑓+} − {𝑣𝑓−}

Imposing that the final relative position is zero and inverting the relation, we obtain the
following expression:

{δ𝑣0+} = −[Φ𝑟𝑣(𝑡𝑓)]⁻¹ [Φ𝑟𝑟(𝑡𝑓)] {δ𝑟0}

Once we know the initial velocity together with the impulse, we can compute the final velocity at
the target point. The second impulse is then the one needed to drive this velocity to 0 => the
final state has zero velocity and position equal to the origin of the frame (all components zero).
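The whole scheme can be sketched with the standard closed-form CW state transition matrix in the FF frame; the values of 𝑛, the transfer time and the initial offset below are illustrative:

```python
import numpy as np

def cw_stm(t, n):
    """Position/velocity blocks of the CW state transition matrix (FF frame, standard closed form)."""
    s, c = np.sin(n * t), np.cos(n * t)
    Prr = np.array([[4 - 3 * c,     0, 0],
                    [6 * (s - n*t), 1, 0],
                    [0,             0, c]])
    Prv = np.array([[s / n,           2 * (1 - c) / n,     0],
                    [2 * (c - 1) / n, (4 * s - 3 * n*t)/n, 0],
                    [0,               0,                   s / n]])
    Pvr = np.array([[3 * n * s,       0, 0],
                    [6 * n * (c - 1), 0, 0],
                    [0,               0, -n * s]])
    Pvv = np.array([[c,      2 * s,     0],
                    [-2 * s, 4 * c - 3, 0],
                    [0,      0,         c]])
    return Prr, Prv, Pvr, Pvv

n = 0.0011                                 # mean motion [rad/s]
tf = 0.25 * 2 * np.pi / n                  # transfer in a quarter of the orbital period
dr0 = np.array([200.0, -1500.0, 100.0])    # initial relative position [m]
dv0_minus = np.zeros(3)                    # relative velocity before the first burn

Prr, Prv, Pvr, Pvv = cw_stm(tf, n)

# First impulse: velocity so that the natural motion reaches the origin at tf
dv0_plus = -np.linalg.inv(Prv) @ Prr @ dr0
dv1 = dv0_plus - dv0_minus

# Propagate to tf and size the second impulse to cancel the arrival velocity
drf = Prr @ dr0 + Prv @ dv0_plus
dvf_minus = Pvr @ dr0 + Pvv @ dv0_plus
dv2 = np.zeros(3) - dvf_minus

assert np.allclose(drf, 0.0, atol=1e-8)    # we arrive at the target
print("total delta-v [m/s]:", np.linalg.norm(dv1) + np.linalg.norm(dv2))
```

Note that Φrv must be invertible at tf, so transfer times near integer multiples of the orbital period (where the in-plane block becomes singular) are to be avoided.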
10.7 Relative orbital elements

In orbital mechanics we have 2 different parametrizations of the orbital motion:

- Inertial state in the Earth Centered Equatorial Inertial reference frame (ECEI)
- Orbital elements → set of 6 variables that uniquely define the orbit
We introduce a similar parametrization also to describe the relative orbit using a combination of
orbital elements. In literature there are multiple possibilities but one of the most common is based
on the following representation:

Imagine having a set of orbital elements combined in a different way (combinations of the
"standard" orbital elements). If we take a combination (resembling a difference) of the orbital
elements of the 2 absolute orbits we are working with (we want to express the relative motion
between the two s/c), the resulting set of variables is called relative orbital elements:
OSS: Conceptually we can think of the right-hand side as the difference between the orbital
elements, where the additional terms are placed there for stability and singularity reasons of the
orbital elements.
In the absolute frame we had 𝑎, 𝑒, 𝑖, Ω, 𝜃, 𝜔; out of these 6 variables, the only one changing in
natural motion (without maneuvers) is the true anomaly (the motion is expressed only by the
evolution of this single term, the other 5 are used to draw the geometry of the orbit) → very
different from cartesian representation (both position and velocity vectors change in time)
Therefore, the power of this parametrization (relative orbital elements) is that, with Keplerian
motion (unperturbed motion), out of 6 terms the only one affected by the motion is 𝜆 (only one
linked to the mean anomaly). Therefore, instead of having this sinusoidal and cosinusoidal evolution
of the motion, we have a vector that defines the geometry of the relative orbit in which just one
component evolves along time (this comes purely from Keplerian dynamics).
OSS: We are not increasing the complexity; we are just expressing the CW equations in a different
way → the set of differential equations becomes simpler, except that we have a relative drift
(increasing ∆𝜆) if and only if we have a difference in semi-major axis:

=> the only change in time is in ∆𝜆 (Δ𝑢̇), and it is due to the difference in semi-major axis
(exactly what we obtained before in the "straight line" example → 2 concentric orbits with a
difference in 𝑎).
This is much easier to handle with respect to all the discussion we have made in the cartesian
domain (easier to understand the trigger behind all of this).
OSS: If we have no difference in 𝑎 we get no drift => instead of writing the ellipse etc., we just
have to make sure that the term −(3/2)𝑛𝛿𝑎 goes to zero, and for the rest we can put whatever we
want, because it will be enough to guarantee a closed orbit.
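The coefficient −(3/2)𝑛𝛿𝑎 can be cross-checked against two Keplerian circular coplanar orbits; a sketch with Earth/LEO values (the 𝛿𝑎 value is illustrative):

```python
import math

mu = 3.986004418e14   # Earth gravitational parameter [m^3/s^2]
a = 7000e3            # chief semi-major axis [m]
da = 500.0            # deputy flies 500 m higher

n_chief = math.sqrt(mu / a**3)
n_deputy = math.sqrt(mu / (a + da)**3)

# After one chief period the along-track separation has grown by (n_deputy - n_chief) * T * a,
# which to first order equals the -(3/2)*n*da drift rate integrated over one period.
T = 2 * math.pi / n_chief
drift_numeric = (n_deputy - n_chief) * T * a    # [m] per orbit, from Keplerian mean motions
drift_linear = -1.5 * n_chief * da * T          # [m] per orbit, first-order prediction

assert abs(drift_numeric - drift_linear) / abs(drift_linear) < 1e-3
print(f"along-track drift per orbit: {drift_numeric:.1f} m (linear: {drift_linear:.1f} m)")
```

The higher orbit has a smaller mean motion, so the deputy lags behind: the drift is negative, consistent with the "lagging" behavior described for the straight-line case.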
The geometry of the relative orbit (at any instant of time it is an ellipse whose center can vary
in time) is given by the norm of the relative eccentricity vector (from center to periapsis), and
we can notice that there is a constant ratio (equal to 2) between the y amplitude and the x
amplitude of the ellipse → we have already seen it in the equation of the ellipse (there is a
factor of 2 between the axes in the x and y directions).
OSS: A difference in semi-major axis means that my
orbit is shifted in the radial direction → we know
that if there is a difference in 𝑎 this will not be
the resulting motion throughout time, because the
∆𝑎 imposes a drift in ∆𝜆 => what we get is that in
time the center of the ellipse drifts toward the
right → the resulting trajectory is actually the one represented in the figure on the side.
This spiraling motion is an envelope of moving ellipses.
Out of plane we have a harmonic oscillation; if we project it onto the r-x plane
we get an ellipse in which the shift of the center is still given by ∆𝑎 and the
amplitude of the ellipse is given by the relative inclination vector.
Besides the drifting behavior, we can immediately picture the relative orbit,
and we know the distance between the point orbiting the relative orbit and the
center just by looking at the norms of the relative inclination and relative
eccentricity vectors.
OSS: All these models have been extended in order to include the effect of perturbations.

10.8 Passive safety


There are some smart ways to design relative trajectories that ensure certain characteristics of
the motion; for instance, one important feature is passive safety. It is the philosophy that drives
a design in which the natural motion inherently guarantees that there are no collisions or
criticalities if the control system does not work → we insert into passively safe trajectories, so
that even if we can't control and navigate the spacecraft anymore, we are on an orbit that won't
collide with the target.
We can easily impose passive safety by reasoning
on the ∆𝑒 and ∆𝑖 vectors ("vectors" because in relative
orbital elements we have 𝛿𝑒𝑥, 𝛿𝑒𝑦, 𝛿𝑖𝑥 and 𝛿𝑖𝑦). This
is achieved by imposing that these 2 vectors are parallel,
which means that whenever the s/c crosses the
orbital plane of the target (the cross-track distance from
the target vanishes), it is at its maximum distance
along the radial direction. If we put the two vectors
perpendicular, we obtain the opposite effect
(whenever we cross the target orbital plane we find the minimum of the radial distance).
OSS: From the dynamics and energy conditions we can see that if we put Δ𝑎 = 0 we obtain a closed
orbit → 2 points will stay in a stable relative orbit if their absolute energies, which for a
generic orbit are expressed by 𝜀 = −𝜇/(2𝑎), are equivalent (energy matching condition) => we need
to stick to the same semi-major axis.
10.9 Cartesian-ROE mapping

There is a way to go from relative orbital elements to cartesian coordinates. Let’s define:

Where Δ𝑂𝐸 = [𝛥𝑎, 𝛥𝑀, 𝛥𝜔, 𝛥𝑒, 𝛥𝑖, 𝛥𝛺]. The first-order approximation of the mapping between the
Hill state and the classical osculating orbital elements yields:

31/03/22

PRELIMINARY SIZING AND MARGINS


You are not asked to size the launcher. The launcher is a service for you; it's an interface. So,
you just look at what a launcher can give you. In the general design (phase 0, phase A...) you look
at the catalogue of launchers; they offer you different alternatives: the launchable mass, the
energy, the envelope (the physical volume in which you can insert your spacecraft), the
uncertainties on the release state vector, the cost, the schedule, the launch base with its
latitude and longitude. That's it. You are paying; someone else is sizing. Do not size the stages
of the launcher, how many there are, boosters in parallel... no.
When you do mission design and then mission engineering this is not asked. If the objective of the
design were to design and then implement a new family of reusable launch vehicles, or for the
Ariane class, then yes; but this is not the case.
What we have to do: "We identified the mission and selected that kind of launcher. For us, the
reason was..., maybe it was the only one available near the ecliptic plane, so as to exploit the
maximum relative velocity with respect to the Earth's rotation, or because it was the only one with
that volume available and we have a very big spacecraft...", then the justification for that
selection. From the spacecraft engineering point of view, not the launcher engineering point of
view.
In the reverse engineering we are expected to use the information in this slide as a path to
compare ??????.
Functional analysis: start identifying the ConOps and the alternatives, for example for the
architecture. Now let's see them with respect to mass and power: where we are, and where the
different alternatives we've put in place are, in terms of kind of spacecraft, number of
spacecraft, way of launching the spacecraft. When you have nothing in your hands, start from the
mass evaluation. Mass is always a criterion that is good to have in hand, and you can quantify it
quite easily.

Way through:
1. Determine the rough mass for the imposed payload
What we have in hand: the payload. Whether scientific or technological doesn't matter; it is the
input from the customers (scientists, agency, whoever). As soon as you have your objectives, the
first rationale you look at is why those kinds of payloads were mounted, to answer those kinds of
objectives and functionalities from the science or technology point of view.
2. Determine the mission class and, accordingly, the on-orbit dry mass from statistical data
You start from the mass of the payload you have on board, as a whole. That can be related to a sort of density of the spacecraft. Statistically speaking, depending on the kind of mission you have, it is possible to correlate the mass of the payload with the overall dry mass of the spacecraft. So, this is the first step: I look at missions similar with respect to objectives, regions visited, or customers; I run my statistical regression and find a correlation between payload and dry mass. Why dry mass and not wet mass? Dry mass is with no fuel at all (be it for attitude or orbital maneuvers), just the skeleton. In this way you can credibly compare satellites; if instead you open the door to which kind of maneuvers you do and which strategy you select for maneuvering (gravity assist, perturbation exploitation, low thrust…), you enter a domain that is critical in terms of taxonomy. Better to limit the comparison and the regression to dry mass only.
3. Determine the total allowable on-orbit mass from current launchers
Step 3 does not necessarily have to be done at this point, but it must be done, at least to start having an idea in our reverse engineering. Look at the launcher in terms of performance: in the launcher manual there is a chapter "performance", with charts that correlate the mission analysis, in terms of initial state vector (orbit injection or energy), with the mass that can be released — the so-called "payload mass" for the launcher, not the payload mass in the spacecraft. So, the overall mass you want to launch versus the capability of the launcher in terms of delta energy. Looking at those charts, you understand which alternatives and flexibility you have in terms of allowed launchable mass on one side and dynamics on the other, i.e. the level of energy the launcher can put you in — whether it is a closed or an open orbit, the discussion is the same.
4. Deduct launch vehicle adapter mass from the launch mass
Then you shall keep in mind that the interface between you, as a spacecraft, and the launcher is a mechanical interface (typically a truncated cone, although things are changing now that multiple satellites are launched at the same time). That is a service mass, belonging to the launcher, which allows you to connect your spacecraft to the launcher. It is part of the launchable mass, since it is dead mass. You have to consider which adapter to use: there is a section at the end of the launcher manual on mechanical interfaces, with alternatives in terms of adapters depending on the size of the spacecraft, its mass, and the kind of launch (single launch / dual launch). You can then decide which alternative to keep in terms of mass.
5. Determine the propellants and pressurants required for the mission
Now jump from dry mass to wet mass, i.e. get an idea of the fuel to have on board. At this stage, at the very beginning, no decision is needed about "I'll stay on low thrust" or "I'll use high thrust, so a chemical impulsive solution". Focus on the fuel mass. The fuel mass follows from the delta-v inputs, through the simplified integration of the mass-flow-versus-time equation for rockets, i.e. the Tsiolkovsky equation. You can use that very simple formulation to translate the delta energy needed, in terms of delta momentum (the delta-v's), into an order of magnitude of fuel to be embarked. This can be done even for electric propulsion, because you have the specific impulse parameter to play with: just changing that value, you are still in a very rough evaluation, but the percentage of fuel mass with respect to the overall launchable mass makes sense — something like 30% of the whole mass with electric propulsion and 70% if chemical.

6. Verify the on-orbit needed mass and launchable mass consistency


Now you just have to cross-check that the sum of dry mass, fuel mass and adapter mass stays well below the launchable mass that the user manual allows for that orbit — and at this point, that's it.
You have an idea and can say “this launcher makes sense. From this starting point of overall mass, I
can start doing my sizing”. Or start comparing with literature as overall mass of the system.
7. Distribute the consistent gross mass (m0) among the on-board subsystem to impose
preliminary constraints to start sizing
In real life you do exactly that, still with some data from statistics. You have the overall mass, you know a launcher exists that can support it and provide that insertion, and you start distributing that mass among the subsystems you have on board, so that each engineer working on subsystem sizing has a starting point. The attitude engineer knows he has no more than, say, 50 kg to start with; the thermal engineer knows he has no more than 4 kg. It is an initial guess, but it at least allows the team to start working in parallel with real numbers. Then there will be a loop. Otherwise, if everybody starts working alone from scratch, each does whatever he wants and the mass of the satellite explodes. It is a top-down procedure (V-diagram approach): fix the boundaries, then stay within those boundaries and see what happens in the sizing.
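The distribution of step 7 can be sketched in a few lines of Python. The subsystem names and percentages below are illustrative placeholders, not values from the statistics tables the slides refer to:

```python
# Step 7 sketch: distribute a consistent gross mass among subsystems so that
# each engineer starts sizing from a shared budget. Fractions are illustrative.

SUBSYSTEM_FRACTIONS = {      # fraction of the on-orbit dry mass (assumed values)
    "structure": 0.20,
    "thermal": 0.04,
    "aocs": 0.07,
    "eps": 0.25,
    "ttc": 0.05,
    "cdh": 0.05,
    "payload": 0.30,
    "harness": 0.04,         # cabling is easy to forget but heavy
}

def allocate_dry_mass(dry_mass_kg: float) -> dict:
    """Return the starting mass budget (kg) for each subsystem."""
    assert abs(sum(SUBSYSTEM_FRACTIONS.values()) - 1.0) < 1e-9
    return {name: frac * dry_mass_kg
            for name, frac in SUBSYSTEM_FRACTIONS.items()}

budget = allocate_dry_mass(1000.0)   # e.g. a 1-ton dry mass
```

If a subsystem engineer later exceeds his allocation, the loop described above restarts: someone cedes mass, or the overall boundary is renegotiated.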

8. Check for the correct margins at each step of the process


Then:

This is not important for you at this stage, but for your life. It is a starting point. At the end of the
story, when you precisely size your components, the interactions among the subsystems on board,
you can forget about these numbers because you’re closing the uncertainties you have, so you don’t
have to be stuck on those.
Suppose this analysis says the mass of your satellite is 1 ton. Then, after point 7, everybody starts doing the precise sizing: simulations, analysis of the environment, the real thickness of the structural plates, the mass of the reaction wheel selected for the momentum you need, and so on. If the mass then turns out to be 800 kg instead of 1 ton, or 1200 kg on the other side, that's ok: you have just refined the design. Keep in mind that the 1 ton coming out of the preliminary sizing is not the law. You do not necessarily have to satisfy that number; you have to satisfy the coherence of the whole distribution.

In the next slides: one slide for each point.


Step 1-2 Mass versus p/l - statistics
This is an example of statistics built with NASA satellites, but you can build your own, depending on the mission. The message: satellites are classified by class of mission, and this makes sense. For communication satellites a lot of mass is connected with the telecom subsystem on board; that is not the case for an Earth observation satellite, but it might be again, in terms of the importance of the telecommunication subsystem, for an interplanetary satellite. So they are clustered to capture the density evolution by category. What you correlate is the payload mass, as a whole, with the dry mass: enter from the vertical axis, exit from the horizontal axis.
If you have a lander, or more than one element on board, the lander is a payload for the spacecraft; the scientific payload of the lander is the payload for the lander; and the whole stack is the payload for the launcher. So be critical: ask yourself who, in this context, is the payload, and therefore which kind of regression you should run to get the correct correlation. If you entered the chart with the lander's payload and looked for the mass of the lander, it would be a nightmare: you would get an elephant, not a lander, and no numbers in the region where a lander sits, because the chart is about spacecraft, not surface vehicles.
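The payload-to-dry-mass correlation of steps 1–2 can be sketched as a simple least-squares fit. The sample numbers below are invented for illustration, not real mission data:

```python
# Steps 1-2 sketch: fit dry mass vs payload mass for one mission class,
# then enter the regression with a new payload mass. Data are hypothetical.

SAMPLES = [(50.0, 280.0), (120.0, 600.0), (200.0, 950.0),
           (310.0, 1500.0), (450.0, 2100.0)]   # (payload kg, dry kg)

def fit_line(samples):
    """Ordinary least-squares fit: dry ≈ a * payload + b."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

A, B = fit_line(SAMPLES)

def estimate_dry_mass(payload_kg: float) -> float:
    """Enter with the payload mass, exit with the estimated dry mass."""
    return A * payload_kg + B
```

The key point from the text holds here too: the fit is only meaningful if all the samples belong to the same mission class and use the same definition of "payload".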
Step 3-4
3. Determine the maximum launch mass for the mission: directly derived from the launcher
capabilities from the launchers’ user manual
Now we have our dry mass. Next, jump into the launcher manual and look at the performance chapter, where you find the graphs. Look at the launcher as a user, not as a designer.

The case for a closed orbit is at top left; the case for a hyperbola, i.e. interplanetary insertion, at bottom right. You see the altitude. Of course, the graphs are parameterized with respect to the Keplerian parameters. Important: the y-axis is always the launchable mass.

The manual should declare whether the adapter is included or not. Adapter included or without the adapter? It could be just some tens of kg, but it could make a difference for you! You have the parametrization in terms of inclination — circular orbits in this case, otherwise you would also have the semi-major axis and the eccentricity, or the two apsidal points — then the altitude, and then a monotonically decreasing correlation with mass, as expected.
Keep in mind that if you read a number there, it does not mean "ok, if I want to insert my spacecraft at 900 km altitude and 70° inclination, I have plenty of mass." At least up to phase B — up to when you cannot yet physically weigh your spacecraft, already integrated — you are asked to keep a 20% margin with respect to the launchable mass; otherwise they don't accept you for launch. Like people, spacecraft put on weight as they age.

Spacecraft: the more you build it, the fatter it becomes.


You must respect the margin. This saves your life during the design.

This is only for overall launchable mass, it’s ruled. But valid for every subsystem.
Something similar can be done for the interplanetary leg insertion: the same graph reported with respect to C3, the square of the excess velocity at infinity on the asymptote, and some graphs also give the inclination with respect to the equator. You read the launchable kilograms. You keep that aside, start building your mass, and at the end compare them, taking care of keeping the 20% margin. Otherwise, you change the orbit or you change the launcher: either you get slimmer, or you find another launcher, or a different strategy for being launched. Instead of being injected at a given inclination and altitude you could stay lower and then trade off a maneuver with your own propulsion — that increases your mass, but it is a balancing act, the least-worst condition.
ADAPTER
4. Consider launch vehicle adapter mass from the launch mass, either from statistics or user
manuals:

Classical adapter
This is attached to the base of the launcher. It is the interface with the spacecraft when you have just one spacecraft: an empty cone, convenient for any nozzle you might have. You select the face on which appendages may stay inside. This is the classical single-spacecraft adapter.
You can also have a dual-launch adapter, the so-called Vespa for Vega. The satellite on top — 1000 kg maximum mass — is released first; then the adapter detaches and there is another satellite inside, larger than the first one, which is released in a second phase of the launch.
Moving to a more complex situation: this is the SSMS (Small Spacecraft Mission Service). You see a structure with satellites placed here and there — it looks like a room of forgotten satellites. It is, in fact, a new concept of adapter: a system that can host many classes of satellites, of different mass and size. The criticality of this system is that it cannot release them one by one, only all at once; then it is your job to take care of your satellite. This is a way of exploiting one launcher for more than two main satellites plus some CubeSats — a different philosophy. Still, there is a main adapter for the SSMS infrastructure, which is a structure to release more than one satellite, so it is a cluster of adapters for all the twenty satellites that are there.

So, you understand this is a very simple regression that refers to solutions like the first adapter, for a single spacecraft. Our missions are basically like this, apart from the twin-satellite ones. Once I know the kind of adapter the launcher offers me, I have an idea of the mass I can attach to it, and I have to keep it in mind: it will be the wet mass, not the dry mass of before, with the fuel mass in between still to be computed.

We are expected to check whether the numbers agree. We find the launched mass, we know the launcher; we break down the wet mass, the mission analysis, the delta-v, the dry mass with respect to the payload, the adapter with respect to the selected launcher — and we have the pieces of the puzzle. See if it makes sense; if there are discrepancies, why? Because of the model, because we are working roughly, because we are doing something wrong…

Step 4-5
5. Determine the propellant required for the mission: preliminary propellant mass computation
(chemical propulsion) with the rocket equation:
∆v = Isp · g0 · ln(m0 / m_dry)

or assuming m_prop = 60–70% of m0 → m_prop ≈ 1.5–2.3 m_dry


Then there is the piece in between: the fuel. At this level, if you have already run a comparison with the mission analysis, at least for the big maneuvers, you can use those delta-v's. Even with electric propulsion it is fine at this point — it is just a matter of using a specific impulse in the thousands of seconds rather than hundreds.
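The rocket equation above translates directly into a small propellant estimator. This is only a preliminary sketch, with the example dry mass, delta-v and specific impulses chosen as plausible placeholders:

```python
# Step 5 sketch: propellant mass from the Tsiolkovsky equation,
# m0 = m_dry * exp(dv / (Isp * g0)),  m_prop = m0 - m_dry.

import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_mass(dry_mass_kg: float, delta_v_ms: float, isp_s: float) -> float:
    """Propellant needed for a given total delta-v and specific impulse."""
    m0 = dry_mass_kg * math.exp(delta_v_ms / (isp_s * G0))
    return m0 - dry_mass_kg

# Chemical (Isp ~ 300 s) vs electric (Isp ~ 3000 s) for the same 2 km/s:
chem = propellant_mass(1000.0, 2000.0, 300.0)
elec = propellant_mass(1000.0, 2000.0, 3000.0)
```

Playing with the Isp parameter reproduces the point made in the text: for the same delta-v, electric propulsion needs roughly an order of magnitude less propellant than chemical.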

6. Verify the total allowable on-orbit dry mass:


Dry mass+p/l+propellant+LVA+ margins < LM

LVA=launch vehicle adapter


(If the left member is greater than the right member, either a different launcher shall be selected or a decrease of any of the left terms shall be imposed — but not of the margin.)
At this point you are in a position to evaluate this inequality. You have your dry mass and your payload mass, you roughly understand the amount of fuel needed and can compare it with what is reported in the literature, you have the adapter of the launcher so you can estimate its share of mass, and then there is the subject of the next topic: the margin.
All of that shall be lower than the launchable mass you find in the user manual. The delta is ruled by a standard, not an invented number: 20% at system level.
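The step 6 inequality can be written as a one-line check. The 20% figure is the launcher margin quoted in the text; the example masses are placeholders:

```python
# Step 6 sketch: the mass at launch (wet mass + adapter) must keep a 20%
# margin with respect to the launchable mass from the launcher user manual.

def launch_mass_ok(wet_mass_kg: float, adapter_kg: float,
                   launchable_kg: float, launcher_margin: float = 0.20) -> bool:
    """True if the total launch mass stays below the margined limit."""
    return wet_mass_kg + adapter_kg <= (1.0 - launcher_margin) * launchable_kg

# Example: 1500 kg wet + 100 kg adapter against a 2100 kg launcher capability.
feasible = launch_mass_ok(1500.0, 100.0, 2100.0)
```

If the check fails, the options are the ones listed above: change the orbit, change the launcher, or get slimmer — never eat the margin.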

Let’s keep on going.

Technology Readiness Level and Margin philosophy


This chart recalls the TRL concept. The margins you put in the design are strongly related to the TRL of the technology you select. Margins are there for two reasons:
- the first is the uncertainty on known variables, which you still have to size precisely, so as to be robust with respect to the preliminary analysis;
- the other is, from the technological point of view, the uncertainty on the real performance of a technology that is still in development. Margins will shrink or grow depending on whether you are at the top or the bottom of the TRL scale. You cannot select a conceptual technology and then put 5% on its presumed mass and power requirement: you shall put 20%, or even 100%.
It's something you put in the requirements. “The design of the whole mission shall follow the
standards of margin philosophy” and then you enter in detail.
What does it mean?
Mass margin’s philosophy – NASA approach

This is the case of NASA, reported in books. Online you have materials, standards of ESA for
preliminary sizing.
This matrix reports the class in terms of mass of your satellite and class in terms of project lifecycle.
Am I at conceptual level (phase 0)? Phase A PDR? Phase B CDR? Phase C PRR already almost to
launch?

It means how mature I am in developing, modelling, integrating, implementing the satellite.

Inside you have the kind of mission you're doing: is it a recurrent mission, or a completely new concept? You see what happens to the margins: the more uncertain and newer you are, the higher they are. And in the last column, the day before launch, a zero margin at system level makes sense: you put your single subsystems, and then the integrated system, on a weighing instrument. You then have the precise mass, with an uncertainty that depends on the instrument you are using, but basically correct, so you no longer need any margin.
This doesn’t happen with the power. You know the power your subsystem can provide. But you’re
not so sure about the environment you’re going to encounter. So, still at launch, with respect to
resources like power or data buffer on board you still keep a bunch of margins for robustness.
Do not assume that once the satellite is built everything is known: you still have to enter the environment, and not every aspect of the environment is so well known that you can close every margin and safety factor.
Mass preliminary breakdown
7. Allocate mass percentage for each s/s: preliminary mass percentage distribution (as % of DRY
MASS)
Now you have built mass as a whole. You know which kind of margin to put on those value. Now
we’re ready for the sizing.
This is still a starting point. You see the classification with respect to the category of satellite; in the rows, the subsystems you have on board; and the reported percentage of the whole mass to dedicate to each subsystem. So each engineer can start sizing, knowing that structure shall not exceed, say, 20%, attitude 7%, and so on: everybody starts with a boundary. If the boundary is exceeded, that is the instant you advise the others and raise a note: "I can size this for the requirements and the needs, but I need 1, 2, 10% more mass. Who is ceding me that mass? Who is getting slimmer?" Or the team leader states it is impossible to get slimmer, so the boundary on the overall mass must be increased and the distribution starts again. At some point you converge, all together, without incoherent spikes. It is not so simple.

There are two columns because in some cases you size the payload as well, so the payload is included in the sizing. For example, for telecom satellites you might be asked to size the whole antennas, transceivers and so on; or you have a technology demonstrator, for example docking ports, and you have to size that too. Otherwise the payload is given: of course you shall add its mass afterwards, but someone is giving you the payload with all its requirements, budgets and parameters, so you don't have to split the service module dry mass with the payload as well.

Power preliminary allocation


Something similar you do with the power. The two main budgets are mass and power.
The scheme is the same: a regression, as we did for the mass. You enter with the power demand of the payload (classical information that is easy to obtain). As we will see when we face EPS sizing, the power demand changes depending on phase and mode, differently from mass. If you are doing a transfer to the final orbit, special cases apart, it is obvious that all your instruments will be off; then, when you are in science or technology-demonstration mode, all those instruments will be on, and the power demand will be completely different. The power budget relates to modes and phases, depending on what is on and off: you won't have a single-number power demand as for the mass. Starting from the payload power input, the focus is on the phase/mode with the maximum power demand — classically, when the payload is on. Again, this is the starting point, not the arrival.

Power margins philosophy-NASA example


Same scheme as before. Depending on the level of cycle as before, distribution in terms of power
level and the margins to be adopted.
Power preliminary breakdown
Again, break down in the subsystems. Depending on the class with respect to overall power
identified statistically or by similarity, the distribution of power.

For communication you see zero, simply because in comsats communication is the payload. It is not that there is no power demand: you are just not duplicating it, because that power is already accounted for in the payload.

Do by yourself, look at the numbers.

Comments:
You see that for Earth observation (MeteoSat is typically Earth observation) and planetary missions, mechanisms are at zero — but this is not a law. Mechanisms means robotics for reorienting antennas, opening solar panels, deploying a robotic arm (not the case here), or whatever. It may be better to start without mechanisms of this kind, which could decrease robustness and increase risk; but then, going on in the design, it is up to you to say "ok, it's impossible to orient the antenna, the payload and the Sun all together". So I relax a degree of freedom — I insert a hinge, for example, a rotational degree of freedom — but I have to communicate to everyone that I will ask for a percentage of power, and someone will have to give it to me. It is a matter of talking a lot with the others, not of having fixed rules.
Reversed engineering: subsystem by subsystem check if distribution makes sense. If yes why? If no,
the rationale behind?
48% would be a very high power demand for thermal. Classically you try to stay passive with thermal control, so plenty of power can be given to the others. Looking at the design of the spacecraft you can discuss, motivate and justify why you don't have those numbers — because it is easy to protect your satellite against temperature just with the right materials and the correct optical properties.

ESA - Margin philosophy at phase 0/A


Highlight, top right: this document is online. Consult it for the mission analysis, and keep an eye on it for the rest of the course, so that you can refer to it depending on the subsystem we are working on.
Margin philosophy: the margin is computed as the difference between the value coming from a simulation, a computation, or a component datasheet, and the value with the margins applied, normalized with respect to the value with no margin.
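That definition can be written down in two helper functions — a minimal sketch of the normalization described above:

```python
# Margin definition sketch: margin = (margined - nominal) / nominal,
# and its inverse, applying a fractional margin to a nominal value.

def applied_margin(nominal: float, margined: float) -> float:
    """Fractional margin implied by a nominal and a margined value."""
    return (margined - nominal) / nominal

def with_margin(nominal: float, margin_fraction: float) -> float:
    """Apply a fractional margin (e.g. 0.20 for 20%) to a nominal value."""
    return nominal * (1.0 + margin_fraction)
```

For example, a 100 kg nominal unit mass with a 20% margin becomes 120 kg, and the two functions are consistent with each other by construction.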

Classification:
First: it is a matter of where I am in the design process, with respect to the phase I am in (conceptual phase 0, preliminary design phase A, advanced design and requirements freezing phase B, and so on) — the design maturity margin; this is one point. The other point is the TRL of the component I am considering: is it available? Already flown? Or still something I am developing and have to qualify in the lab? And at which level of component: solar cells, propulsion unit, tanks, structural material, electronics?
System level very important.


Keep in mind you put margins on the single building block and then a margin on the overall system
for the dry and for the wet. And then you keep the delta margin with respect to the launcher. So,
plenty of margins.

This is because of the design.


Then you also have to consider margins wherever you have any kind of uncertainty, because even your simulations work with random or unpredictable quantities: for example, the density of the Martian atmosphere for an entry vehicle, or the density of the Earth's atmosphere (for drag) which depends on the solar cycle — anything that still lacks a model refined enough to reduce the uncertainty, and thus the margin, to zero.

Margin philosophy at phase 0/A – definitions


Design Maturity Margin – DMM
First level of margin, applied at unit level and reflecting the design maturity of the unit. Design
maturity margins (DMMs) are used to cover “known unknowns” and acknowledge the fact that the
resources required by individual units and components increase throughout the study/project as
more detailed design and analysis activities are performed. DMMs are sometimes also referred to
as contingency.
DMM: at UNIT level, not at system level. Its scope is to cover the known unknowns — unknown because of the stage of the project you are at, not because you will never reach the real mass, which at the end of the story you simply measure.
Then come the deterministic / stochastic maneuvers.

Deterministic maneuver

A maneuver that can be accurately calculated, with no probabilistic effect or error distribution. Deterministic maneuvers are those you can compute precisely at some point of the mission — even adjusting for the launch if you have refined ephemerides, and using experimental data from your propulsion unit rather than the datasheet.
Launch mass = wet mass + launcher adapter.
The launch mass is the wet mass, which includes all the margins, plus the adapter.

Launcher margin
Applied on the maximum separated mass and reflecting the accuracy of the predicted launcher
performance for a specific orbit and launch date when it does not originate from the User Manual
or the launcher authority.
The launcher margin is the one that shall be kept with respect to the launchable mass: you shall stay 20% away from the limit.
NOMINAL DRY mass at launch
Mass estimate of the spacecraft, including DMMs and payload level margins, but excluding the
system level margins, the propellant (and pressurants) mass and the launcher adapter mass.
It is the mass which includes the sum of the mass of each subsystem you have on board with their
margins.

PAYLOAD with any margins the payload might have

But you don’t have the system level margin, this will be included afterwards.
Also, it’s dry. So, no propellant and no adapter.

Then something similar for the power. That is mirroring the mass.
You have all the DMM for the power margins, but you don’t have the system level margin.

Margin philosophy at phase 0/A – MASS

System level mass margin: Third level of mass margin, applied on the nominal dry mass at launch.
Then you have the system-level margin. This too usually stays around 20%, and classically people tend to eat into it. That is not correct: you shall preserve it. The reason is this: every brick has its own margin; you do the sum; on this sum you apply a margin; then you add the propellant, which has a margin as well because of the delta-v margins; then you have the whole. Finally you add a margin with respect to the launchable mass — and you shall not touch that margin.

In our case: if using the scheme seen before for preliminary sizing you shall include these margins
before comparing with numbers from literature coming from the real flight. So, depending on how
you build the comparison you should or should not take into account these kinds of margins.
For the mass, we already said you apply the system-level margin on the dry mass: you build the nominal dry mass plus the system margin, and then the nominal dry mass, the system margin and the propellant. Note that for a lander, or any multiple-vehicle configuration, you do the same for each vehicle, then sum the whole wet mass — still keeping the delta with respect to the launchable mass (mentioned before with the graphs).
Visualization: subsystem (or equipment) design margin at each point.

Then the payload with its margins, however many payloads there are.


Then the whole nominal dry mass, adding the margin for the system; then the same for the propellant, obtaining the whole wet mass; adding the launch adapter; and keeping the delta with respect to the launcher.
What you have on the other side is the case in which you have a lander: this is the lander, this is the orbiter, and the overall system mass is the sum of the two, each already margined at system level.

Some numbers very quickly.

The DMM
5%–20%, depending on what you are assuming. Is it something that has already flown and is well known — I just take that solar cell and put an identical one on my satellite? Or do I keep the same solar cells but change the shape, the cover, part of the manufacturing? Or do I start with flexible solar cells never used before?
Categories are identified by a classification, explained afterwards; all of them are standard.
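The idea of a category-dependent DMM can be sketched as a lookup table. The category names and exact percentages below are illustrative placeholders spanning the 5%–20% range quoted above; the real values come from the applicable standard:

```python
# DMM sketch: design maturity margin chosen by product category.
# Category names and percentages are assumed, within the 5%-20% range
# quoted in the text — consult the applicable standard for real values.

DMM = {
    "recurring_flown": 0.05,      # flown as-is, well known
    "modified_design": 0.10,      # same unit with modifications
    "new_development": 0.20,      # new design / low TRL
}

def unit_mass_with_dmm(nominal_kg: float, category: str) -> float:
    """Unit-level mass estimate with its design maturity margin applied."""
    return nominal_kg * (1.0 + DMM[category])
```

The unit-level margined masses are then summed, and the system-level margin is applied on top of that sum — margins stack, as described above.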

Product Categories (slide20)


A clarification of the same classification with respect to where you are in TRL: you see the mapping. Here it is qualitative, but you can attach levels 1/2/3/4…
MARGIN - SYSTEM

System-level margin: when you have the nominal dry mass, add 20%. Then, with respect to propellant, keep an eye on the delta-v's from the mission analysis. For PM-3, the fuel mass: keep an eye on the percentage of residual fuel that cannot be used in the tanks (typically called ullage), no matter the kind of propulsion (electric or chemical), plus the 20% margin for the system.
Last but not least: do not forget that a satellite is basically cables. If you open a satellite, it is full of cables. Rough idea: for 6 CubeSats we had 200 m of cables — 30 meters of cable packed inside a 10x10x10 cm unit. That's mass! It shall not be forgotten when you size. At this level you don't compute the length of the cables you need — you don't even know which interfaces you have, or which unit must be connected to which — you just take a percentage of the nominal dry mass.

Margins -manoeuvres
Rules for the manoeuvre.

If you have a deterministic manoeuvre — meaning good propagators and models are being adopted — you can stay at 5%, or 10 m/s, whichever is the highest, i.e. the more robust margin.
If you are evaluating the delta-v's stochastically — not analytically but by comparison, also from the literature — you have to put 100% on the delta-v's. The whole mass will explode, if you have in mind the distribution of mass percentages among the subsystems on board. So keep track of the margins: as soon as you change the way of modelling the manoeuvre, you also change the margin and the flow of the computed mass.
At this level, still phase A, attitude control, whatever it is, takes 100% margin. Typically these are cheap manoeuvres, but in any case it means doubling the delta-v you compute.
Gravity assists: this covers both the delta-v coming from the natural part and from the manoeuvred part; the margins to be applied are reported — 15 m/s for planets, 10 m/s for moons of planets. The same for the navigation manoeuvres, e.g. the TCMs mentioned before: margins to be put in place, without any consideration of the real propagation at this point of the story.
In this case (electric propulsion) you shall apply important margins, even margining the propulsion unit itself in terms of lifetime and endurance: only 90% of what is declared in the datasheet should be considered. The same for the delta-v's for which this kind of solution is used — the numbers are not the same, being higher both for gravity assist and navigation with respect to the chemical case.
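The deterministic/stochastic delta-v margin rules quoted above (chemical case) can be sketched as:

```python
# Delta-v margin sketch for the chemical case, per the rules above:
# deterministic manoeuvres get max(5%, 10 m/s); stochastic estimates get 100%.

def margined_dv(dv_ms: float, deterministic: bool) -> float:
    """Delta-v (m/s) with the phase-0/A margin rule applied."""
    if deterministic:
        return dv_ms + max(0.05 * dv_ms, 10.0)
    return 2.0 * dv_ms   # a 100% margin doubles the delta-v

# A 1000 m/s deterministic burn gets +50 m/s; a 100 m/s one gets +10 m/s.
```

Note how the 10 m/s floor dominates for small manoeuvres, while the 5% rule takes over above 200 m/s; a stochastic estimate always doubles.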

05/04

ENVIRONMENT
Interaction with radiation can even be useful: we must look not only at the issues due to the environment, but also at the opportunities.

We must go through these points at each mission phase and translate them into requirements.

THE SUN
The Sun emits radiation and particles (ions, electrons and protons), so the environment is neither vacuum nor neutral. The duration from emission to arrival must be monitored. I have to focus on the reactions I can have, because these radiations and particles are modeled but not deterministic: the effects are predictable with some randomness and must be counteracted from a design point of view. (Sunspots and flares are predictable, but with some uncertainty; the effects must be counteracted through the design.)
E.g., we can switch off the satellites, or not assume the GPS signal as valid whenever there are solar flares, because in those cases the evaluated measurement is corrupted. If there is time to react, fine; otherwise, think of it during the design phase. We should have the possibility to communicate with the ground station, give advance warning of future events and spread the info — take into consideration the whole chain of detecting the event, communicating it, and then spreading the message.
There are different levels of energy; low-energy particles can also be dangerous, but in different ways (interactions between atoms or nuclei). E.g., the HERMES mission is sensitive to high-energy emission (X-rays) from merging neutron stars or forming black holes, but this energy is also emitted by the Sun, so we have to turn off the sensors or ignore the data during solar flares. So it is not only the materials that are damaged; there are other bad effects to take into consideration.
The Sun has cycles of 11 years (in 2025 we will reach a peak of emission, both of radiation and of mass). As we will see, the more the Sun becomes active, the more effects are produced: more dissociated oxygen, an atmosphere that attacks materials more aggressively, a larger atmospheric volume so we suffer more drag, more material degradation, etc. We can use these effects with intelligence: we might launch in a low-solar-cycle period and dispose of the satellite when the activity is high.

Sunspots are created by variations of the magnetic field, which can close in loops because the Sun is gaseous. The Sun is observed by placing an occulting object over the core and looking at the corona's emission.
The Proba-3 mission is a formation-flying mission on a very elliptical orbit, with an occulter and a detector. The orbit is elliptical because at apoapsis the formation is very stable and there is time to make good measurements.

The figure shows the frequency at which we can communicate during high solar activity. The Lagrangian points are the preferred points to monitor the Sun, offering long-duration visibility of it. Using satellites there, we can get information about the activity and predict events. The frequency of the solar flux was the only parameter the first ground-based space weather stations could monitor.

The figure shows the intensity of the Sun versus date and the flux of radiation (measured at a wavelength in cm): red and blue are the predicted and measured values. Looking at the number of sunspots and at the flux, we can reconstruct the activity of the Sun, with its cycles of about 11 years.

There are also graphs showing the future trends of solar activity, which can be used to design future missions. For our analyses we can use average values.

Here the whole span of the electromagnetic spectrum is reported; the peaks lie in the UV band. When radiation passes through a gas we have absorption: red is the Earth's atmospheric absorption, while yellow is the Sun's spectrum. Venus has a different atmosphere, so its absorption bands are different.
The electrical power subsystem, the thermal subsystem (emissivity and absorptivity are modified) and materials sensitive to UV radiation (whose optical properties change) are the most affected by solar radiation. A material that emits can turn into one that absorbs: take into account end-of-mission properties.

The usefulness of solar panels depends on the distance from the Sun: the nearer the Sun, the more convenient their usage. For missions towards Mercury or Venus they are really efficient, but proper thermal control is needed: going to Mercury or Venus we have more incoming radiation from the Sun (we need to keep the panels at an acceptable temperature), while moving away from the Sun we lose efficiency (we need larger panels).

Radiation and conduction are the two mechanisms that describe what is going on from a thermal point of view. The hotter you are, the worse the solar cells work: we lose efficiency because we are hot.
Two contributions govern what happens to the thermal subsystem:
1. Radiation: emitted by the Sun, reflected by other bodies (planets, other satellites, etc.), and emitted by other bodies such as planets.
2. Internal avionics emission.
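The radiative balance above can be turned into a quick equilibrium-temperature estimate. A minimal sketch, assuming a flat plate facing the Sun at 1 AU that radiates from both faces (the plate geometry and the coefficient values are my assumptions, not numbers from the course):

```python
# Radiative equilibrium of a sun-facing flat plate (illustrative sketch):
# absorbed solar flux = emitted flux -> alpha*S = 2*eps*sigma*T^4
SIGMA = 5.670e-8   # Stefan-Boltzmann constant [W/m^2 K^4]
S = 1361.0         # solar constant at 1 AU [W/m^2]

def equilibrium_temp(alpha, epsilon):
    """Equilibrium temperature [K] of a plate radiating from both faces."""
    return (alpha * S / (2.0 * epsilon * SIGMA)) ** 0.25

# Beginning of life: reflective coating (low absorptivity)
t_bol = equilibrium_temp(alpha=0.2, epsilon=0.8)
# End of life: UV-degraded coating absorbs more, same emissivity
t_eol = equilibrium_temp(alpha=0.5, epsilon=0.8)
print(f"BOL: {t_bol:.0f} K, EOL: {t_eol:.0f} K")
```

The point of the exercise is the one made in the text: with a degraded (more absorbing) coating the plate runs tens of kelvin hotter, so the design must be checked at end-of-life properties.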

Albedo and planetary radiation depend on position and season: reflectivity is high in winter (polar zones) and emissivity is higher in summer (desert and water zones). Surface materials determine the radiation levels; on Mars there is a different mineralogy, which changes them. Albedo affects thermal control; in eclipse the albedo contribution is much lower, so we must consider a time-varying albedo.

What happens with respect to the surrounding bodies? Albedo, plus our own emissivity and absorptivity. UV is aggressive, so we should choose our materials properly, because material properties are really degraded by solar impingement. One approach is decoupling the internal from the external environment: a shield or coating of aluminum, because it is less attacked. When sizing, if a material is selected for one functionality, keep an eye on the other functionalities: maybe the material has good properties but lasts only a short time, so it is not good.
When launched, the material has high reflectivity, but with time its absorptivity increases due to color change. Check that the design is still acceptable, because the temperature will be different at end of life.

ATMOSPHERE
We are not interested in the region up to 200 km (the launchers' domain); we want to investigate the ionosphere. This part of the atmosphere is responsible for plasma and electric arcs (because the satellite is always charged, positively or negatively), so it is important to ground the satellite to avoid permanent charge build-up.
Effects:
- Drag, which depends on the ballistic coefficient: there are tables to consult;
- External materials must be chosen so as not to be attacked by atomic oxygen, with appropriate levels of reflectivity and absorptivity; structural parts are chosen for stiffness, functional parts for optical properties;
- Deposition of ionized particles on the satellite (e.g., charged particles deposit on cameras, blinding them over time);
- Heating (a side effect of drag).

Whenever you design, you should adopt a model, use data from it, declare it and choose it based on the scope. GRAM is one possible model, available not only for the Earth's atmosphere. The exponential decay model is a rough model.

Relation with solar activity: the right-hand graph shows temperature as a function of time. The effects are related not only to the launch date but also to the mission duration.

There is a jump of two orders of magnitude in density because of solar activity: launching 5 years earlier or later makes a huge difference. For ExoMars the launch shifted by 5 years, but the fuel tanks could not be resized, so the phases were replanned. E.g., with low activity instead of high activity, the disposal must be replanned, because the atmosphere will not offer enough drag for an automatic re-entry.

Decay also depends on the ballistic coefficient (the ratio between the drag coefficient times the surface area and the mass); check whether the literature defines it as kilograms per square meter or the reciprocal (EU and USA define it reciprocally to each other).
At the same altitude, if the drag is larger (mass over surface smaller), the lifetime is shorter. Drag affects the disposal. The drag coefficient is usually between 2 and 2.5.
The graph shows the evolution with respect to eccentricity (or perigee). Use these graphs to cross-check our mission.
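As a quick cross-check of the decay discussion, the ballistic coefficient and the rough exponential density model mentioned earlier can be combined into a drag-deceleration estimate. A minimal sketch, with illustrative reference density and scale height (my assumed values, not course data):

```python
import math

# Exponential density model rho(h) = rho_ref * exp(-(h - h_ref)/H) and drag
# deceleration a = 0.5 * rho * v^2 / BC, with BC = m / (Cd * A) [kg/m^2].
def density(h_m, rho_ref=2.7e-12, h_ref=400e3, scale_height=58e3):
    """Very rough air density [kg/m^3]; reference values are illustrative."""
    return rho_ref * math.exp(-(h_m - h_ref) / scale_height)

def drag_decel(h_m, v_ms, mass_kg, cd, area_m2):
    """Drag deceleration [m/s^2] for a given ballistic coefficient."""
    bc = mass_kg / (cd * area_m2)   # EU/USA-style ballistic coefficient
    return 0.5 * density(h_m) * v_ms**2 / bc

# 100 kg satellite, Cd = 2.2 (in the typical 2-2.5 range), 1 m^2 frontal area
a_400 = drag_decel(400e3, 7670.0, 100.0, 2.2, 1.0)
a_500 = drag_decel(500e3, 7610.0, 100.0, 2.2, 1.0)
print(a_400, a_500)
```

The deceleration drops sharply with altitude, which is why a lower perigee (or a smaller mass-over-surface ratio) shortens the lifetime, exactly as in the graphs.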
IONOSPHERE

Because of UV radiation, diatomic oxygen is dissociated into atomic oxygen, which is very aggressive. We can use the ionosphere to bounce signals and make them reach other stations.
Convective thermal effects are considered only for re-entry or for LEO satellites: a satellite orbiting at 200 km is more stressed, so an eye must be kept on convection.


Atomic oxygen depends on solar activity (on Venus the situation can differ). It also depends on altitude: the higher the flux, the more atomic oxygen is present, with gaps of two orders of magnitude.
Mylar is widely used in blankets for material protection, but in the ionized region the mass of the element changes by 35% in 3 days.
The thickness of a truss must be sized for end-of-mission conditions, not for beginning of life: we oversize, but for the whole lifetime.

If the absorptivity and emissivity coefficients of a material change, the temperature changes. A classical absorptivity value is 0.8.
Material regression is due to oxygen impacts on the RAM surface (the surface exposed to the flux, facing the direction of motion). Density and oxygen dissociation depend on solar activity, and therefore so does material regression.

On the slides there are tables with reaction efficiency for a large variety of materials.
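Those tabulated reaction efficiencies can be turned into a surface-recession estimate: eroded depth = reaction efficiency × fluence, with fluence = flux × time. A minimal sketch with illustrative values (the Kapton-like Re and the flux below are my assumptions, not slide data):

```python
# Atomic-oxygen surface recession: depth = Re * flux * time
def recession_cm(reaction_eff_cm3, flux_atoms_cm2_s, seconds):
    """Eroded depth [cm] from reaction efficiency [cm^3/atom] and fluence."""
    return reaction_eff_cm3 * flux_atoms_cm2_s * seconds

one_year = 365.25 * 24 * 3600
# Kapton-like Re ~ 3e-24 cm^3/atom, illustrative LEO ram flux ~ 3e13 atoms/cm^2/s
depth = recession_cm(3e-24, 3e13, one_year)
print(f"~{depth * 1e4:.1f} um eroded per year")
```

Tens of micrometers per year is why the external-material thickness is sized for end-of-mission, not beginning-of-life, conditions.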
Transcript of 7 April

Plasma
Plasma is made up of ionized gases. There are three regions whose plasma affects all spacecraft:

- The ionosphere (of the Earth's atmosphere, but also of other planets), with a cold-plasma regime (the plasmasphere) at the top of the atmosphere.
- The magnetosphere, the region above the ionosphere, where particles are trapped by the Earth's magnetic field.
- The solar wind, which surrounds the magnetosphere in the rest of the volume; it originates at the Sun, which blows it throughout interplanetary space (mean speed 468 km/s).

In any case, a mission must face plasma interactions. However, in proximity to planets that have ionospheres or magnetospheres, the effects on the mission are stronger; protection must therefore be considered for any conductive or electrically sensitive part.
Looking at the whole environment in the figure, we can see the effects as a function of energy: depending on the particle energy in electron-volts, we get currents ranging from undetectable to high. Of course, if the incident energy levels are very high (GeV), there will be damage to electronic components, telecommunications and messages.
For Jupiter and Saturn, which have significant atmospheric effects, plasma interactions are stronger.
Polar orbits in general (not only around Earth) are affected by the ionosphere and magnetosphere.
Charging effect
If we fly through a flow of charged particles (i.e., electrons or protons), then depending on the material the spacecraft can charge differently. If the propulsion system is electric, the spacecraft may accumulate negative charge on its surface, because electrons, which are faster and less massive than protons, may close the voltage loop on the surface itself; an equipotential spacecraft avoids this effect, and the resulting arcs.
Plasma may also cause interference (i.e., noise) with onboard instruments working in a specific band of the electromagnetic spectrum; a solution could be changing the frequencies the spacecraft is sensitive to.
Another countermeasure to plasma interactions is to cover the spacecraft with a metallic coating and then ground it. However, if the layers are very thin, this solution may not be acceptable, since it is like being in direct contact with the plasma: the problem is the thickness of the metallic layer.
Another plasma-related aspect is the sputtering ("waterfall") effect: the ions impacting the material may splatter atoms of the material itself, generating other, more massive ions and eroding the surface. This is a mechanical effect of the interaction, and a source of further ionized particles.

Example: the Moon has no atmosphere, so it is directly exposed to the solar wind; the interaction with the regolith may charge the dust particles, making them stick to the lander or rover. However, some regions of the Moon have a small, weak magnetic field, which partially mitigates the solar wind problem.
Electric propulsion works by producing plasma from the most massive particles available, to have high momentum and therefore high thrust. However, the ions are slower than the electrons, so the surface tends to charge negatively, momentum is lost (particles are attracted back to the surface) and efficiency drops. Pay attention to the charged plume.
The only components we want to keep at a voltage difference are the solar panels. Flying in an active plasma region causes losses due to spurious currents generated by the plasma interaction; the effect changes depending on whether the spacecraft is in sunlight or eclipse. Grounding is applied all over the spacecraft to avoid voltage differentials, and there are requirements on the maximum permitted voltage.
Multilayer insulation is used for thermal control. The external material is sized according to plasma, UV, reflectivity, absorptivity, etc.; it must be reflective and metallic, or use nets, with vacuum between the layers. The result is a stack of layers that decouples the external from the internal environment: whatever comes in is reflected, and the internal heat is contained. For example, thermal blankets are never stretched taut but always left crumpled: if they are relaxed, the thickness of the vacuum layers increases, improving the effectiveness of the insulation. The same question applies to the electrical layer: the external part must be at the same electrical potential as the internal part, so that no electric arcs form between the parts; this is accomplished by using a conductive layer. Every time something is connected, questions about the structural, electrical and thermal interfaces must be raised, which leads to minimizing the number of connections.

Plasma will interfere with all signals sent, not only ground links but also inter-satellite ones. Plasma must also be considered for electric propulsion, for scientific instruments, wherever there are high-voltage systems, and for testing and verification.

Magnetic field

A planetary magnetic field arises from the combination of three elements:

- an iron-based core;
- the planet's crust, a very thin layer;
- the position in altitude and the Sun's activity.

For the Earth, the magnetic field is divided into:

- The internal field, in turn divided into the Main field, generated by the fluid nucleus, and the Crustal field, a static field generated by magnetic rocks in the mantle.
- The external field, generated by electric currents in the ionosphere and magnetosphere due to the solar wind interaction, plus the currents induced in the mantle and crust by the variable external field.

For the modeling, there are rather rough equations and coefficient sets, as shown in the images.
On the Earth there is a big discontinuity "bubble" in the magnetic field, located over the South Atlantic Ocean, approximately around Brazil, where the magnetic field has a lower magnitude. Flying through that area is dangerous, even at 400-600 km altitudes; protection shields must be sized accordingly (aluminum boxes for avionics), or the spacecraft should avoid critical operations during the passage over that zone. The plasma around the Earth and the magnetic field may trap different particles, and these effects must be taken into consideration.
The magnetic field is not geometrically fixed: it depends on the Sun's position, particles and activity, and rotates according to the Earth's angle with respect to the ecliptic plane. As altitude increases, the effects change according to the Sun's behavior; the lower the altitude, the smaller the changes in magnitude due to the Sun.

AP index is defined by the currents flowing in Earth’s


ionosphere and magnetosphere. For high numbers of AP
(>100) there is a severe solar storm, while for lower values,
there are low intensity storms.
Now there is a new frontier, SWE, in ESA that studies the space
weather accordingly to the Sun activity and Earth’s
magnetosphere and ionosphere. The idea is to put in orbit
small satellites equipped with sensors for those studies to
obtain a real-time data for 3D modelling of the magnetosphere.

Energetic particles and radiation

Energetic particle sources are divided into three main groups:

- Trapped radiation-belt particles
- Galactic cosmic rays
- Solar particles

Trapped protons and electrons are quite high in number density, but lower in energy than the MeV range reached, for example, by galactic rays (1 eV = 1.6022e-19 J).
example.
The Van Allen belts are regions of trapped particles which, along with the South Atlantic Anomaly, represent the two major discontinuity zones in the electromagnetic environment. The regions consist of:
- Protons between 100 keV and hundreds of MeV, from 1 to 4 Earth radii.
- Electrons between tens of keV and 10 MeV, from 1-4 and 2.5-7 Earth radii.
The Van Allen belts are equatorial regions; the most dangerous zone is the inner one, since it is the most energetic. The outer part is due to electrons, which are less energetic, so flying there is safer than in the inner belt. Also in terms of mechanical collisions the electrons have a smaller effect, while protons, being more massive, create more damage.
The first spacecraft to measure the effects of the Van Allen belts was Explorer 1, the US counterpart of Sputnik.
Other planets have the same type of belts, for example Jupiter and Saturn; since these planets have a higher mass, their belts are also larger than Earth's. Some of the moons revolving around these planets are inside the belts, so a spacecraft sent to those moons will be affected by that environment.
There is an online software tool that simulates the trapped-ion situation for almost all planets: [Link]
There are also other simulators for other effects: MASTER for the debris distribution, and DRAMA for the decay and fragmentation during re-entry and disposal.

Cosmic rays:

- The flux is low compared to the trapped particles;
- They are highly energetic, and higher during solar minima;
- They have a high rate of energy deposition, as measured by their linear energy transfer (LET).

Galactic rays can only be modeled stochastically, since no deterministic prediction is possible. The higher the mission altitude, the higher the incoming flux from galactic regions. When solar activity is high, the galactic flux is lower, since the solar activity "fights" the incoming rays. The shielding against galactic cosmic rays must therefore be designed according to the period in which the spacecraft will fly.
The effects of these radiations are:

- Ionization: how many particles have an electronic interaction with the satellite, in terms of energy. Every system on the spacecraft must be sized accordingly; for example, every component datasheet reports a TID level (the total dose of radiation the part can accept while working nominally). The total dose is flux times time, and its value dictates the sizing of the satellite.
- Atomic displacement: a mechanical interaction; no spurious current is generated, the damage is purely mechanical.
- Prompt effect: in a string of bits, in the transistors, a bit may flip from 1 to 0 or vice versa, which may bring down the entire system.
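The "total dose is flux times time" rule can be used as a quick margin check against a component's rated TID. A minimal sketch, with illustrative dose-rate, duration and rating numbers (my assumptions, not datasheet values):

```python
# TID margin check: accumulated dose = dose rate * mission time, compared
# with the component's rated TID, with a design margin factor applied.
def tid_ok(dose_rate_krad_per_year, years, rated_tid_krad, margin=2.0):
    """Return (fits, accumulated_dose): True if dose * margin is within rating."""
    total = dose_rate_krad_per_year * years
    return total * margin <= rated_tid_krad, total

# e.g. 5 krad/year orbit dose, 7-year mission, part rated at 100 krad
ok, total = tid_ok(dose_rate_krad_per_year=5.0, years=7, rated_tid_krad=100.0)
print(ok, total)
```

With a factor-2 margin, 35 krad accumulated against a 100 krad rating still fits; shifting the mission into a higher-dose orbit or a longer duration quickly changes the verdict.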

Ionization
Basically, it is the release of particles from atoms or nuclei.

Ionization with a nucleus:

- Inelastic collision: incident particles are deflected by the nucleus; part of the energy goes into creating photons or exciting the nucleus.
- Elastic collision: incident particles are deflected, and part of the kinetic energy is transferred to the nucleus, by momentum conservation.

Ionization with atoms:

- Elastic collision: incident particles are deflected elastically; the energy is not sufficient to remove electrons from the atoms.
- Inelastic collision: transfer of energy able to remove electrons from the atom (ionization) or to excite them (excitation).
Atomic displacement
- Atoms are displaced from their usual sites in crystal lattices.
- The main source of displacement is energetic protons.
- Solar cells may be damaged, with power reduction.

Prompt effect
An anomaly caused by a single energetic particle striking a device: the impact gives rise to an ionized track of electron-hole pairs along the particle's path through the semiconductor.

21 April 2022. Slides: SSEO_L12_PS_2022_part2

ELECTRIC PROPULSION
Here the mechanical energy comes from a different source than in chemical propulsion: an electric or electromagnetic source. Going down the taxonomy, the thrust-to-weight ratio decreases, giving finer, more accurate control but less authority: a big jump in momentum exchange takes longer.
Taxonomy:

- Electrothermal (T/W < 1e-3)
o Types: resistojet, arcjet
o Principle: thermal/mechanical energy exchange
The propellant remains neutral and is accelerated by heating it with an electric source, an arc or a resistor: energy is transferred, the mobility of the propellant increases, and a classical nozzle expands it. These can be used for orbital maneuvers.
Then there are the two big domains of electric propulsion proper: electrostatic and electromagnetic. Both cover two main application domains, primary and secondary propulsion. Primary propulsion has a sufficiently high thrust (hundreds of mN) and can be used for orbital maneuvering; it exists both in the electrostatic and in the electromagnetic family. The difference lies in the mechanism by which the ionized propellant is accelerated: the key point is that here the propellant is ionized, and as soon as it is ionized, the positive part is accelerated — no longer a chemical, neutral propellant.

- Electrostatic (T/W < 1e-4)
o Types: gridded EP, field-emission EP
o Principle: electric/mechanical energy exchange
- Electromagnetic (T/W ~ 1e-4 to 1e-6)
o Types: Hall effect (magnetostatic), pulsed plasma thruster (PPT)
o Principle: magnetic/mechanical energy exchange
Then there is another class with lower thrust, dedicated to fine control: secondary propulsion. It can be used for station keeping, relative dynamics and attitude, in the micronewton class. Their performance is lower, and they are not applied to large or interplanetary transfers.
To identify where each technology can be used, look at criteria such as the specific impulse (and hence the total impulse) and the power demand. For chemical propulsion we saw the power budget (do not forget it, because some power is always needed); for electric propulsion the power demand is one of the primary avionics loads, and the classical trade-off is made against the utilization. These are the two quantities reported for electric propulsion: the sizing is driven not by the power required by the scientific modes but by the thrusting modes. Electrothermal solutions sit in a subset quite close to what you know for chemical propulsion, at least in specific impulse, while moving to thousands of seconds of specific impulse means moving to the domain of electrostatic and electromagnetic engines. The magnetoplasmadynamic thruster, which combines the two aspects (high thrust and high impulse), is still in development, with no flight heritage. The elliptical regions in the chart report the applications of each solution.

The same is reported in another chart with a different combination of criteria and solutions, from very fine accuracy in controlling the dynamics (such as pointing) up to primary propulsion. The thrust level never reaches even 1 newton, so if you want more thrust, you must use more than one thruster — not so easy from a configuration point of view — and if you want to use them in parallel, keep the power demand in mind. As seen in the previous figure, there is an important dependency on the power, depending on which kind of power source you select. Think of solar panels: the thrust depends on the power input, so the control/optimization variable is the distance from the Sun. You have a thrust that depends strongly on the electric power, which has an inverse-square dependence on the distance from the Sun; one strategy is, for example, thrusting only below a certain distance. Keep in mind that this is not like chemical propulsion.
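The inverse-square scaling described above can be sketched numerically. A minimal sketch, assuming an ideal array (no degradation) and an illustrative thrust-per-kilowatt figure — both the 10 kW reference power and the 60 mN/kW ratio are my assumptions, not course numbers:

```python
# Solar-electric propulsion scaling: array power, and hence thrust, ~ 1/r^2.
def power_at(r_au, p0_watts=10_000.0):
    """Available array power [W] at r AU, normalized to p0 at 1 AU (ideal)."""
    return p0_watts / r_au**2

def thrust_mN(r_au, thrust_per_kw_mN=60.0):
    """Thrust [mN] assumed proportional to power (illustrative 60 mN/kW)."""
    return power_at(r_au) / 1000.0 * thrust_per_kw_mN

for r in (1.0, 1.5, 2.0):           # Earth, ~Mars distance, beyond
    print(f"{r} AU -> {thrust_mN(r):.0f} mN")
```

Thrust drops to a quarter by 2 AU, which is why a "thrust only below a certain distance" strategy appears in trajectory optimization for solar-electric missions.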

Electric propulsion generalities:

Keep in mind the building blocks of the electric propulsion subsystem. Even nuclear reactors are being considered to supply electric thrusters, but they are still in the research domain. These blocks are quite important to size: the propellant storage and feed system are also relevant, because the power must be managed and controlled from whatever source toward the thrusters.
A comparison of the quantities to focus on when designing the thruster: as said for chemical propulsion, in this context you are not required to design the thrusters, but to select them. The important parameters for chemical or electrothermal propulsion are the molar mass, the temperature and the expansion coefficient; for electric thrusters the key quantities are the electric charge and the beam voltage (ions are created and then accelerated) and, of course, the mass of the accelerated elements, which here are the positive ions. The classical propellant choice is xenon, almost always, for Hall and gridded solutions; FEEP (field-emission electric propulsion) and PPT differ, using liquid or solid propellants for other reasons.
A table gives some numbers to keep in hand: specific impulse, required power (kW) and the conversion efficiency to keep in mind.

You have your delta-V budget and you must decide which propulsion to use for primary and secondary propulsion: large maneuvers versus small repeated ones like station keeping and so on. This is the kind of analysis you shall run. From the Tsiolkovsky equation you know that the higher the specific impulse, the lower the fuel mass, for a fixed dry mass. In terms of fuel fraction, electric systems reach roughly half the fuel mass ratio of chemical ones (a big advantage in propellant mass, and therefore in pressurization, tanks and so on). But you must not compare only the propellant mass: it is true that the overall propulsion subsystem mass is reduced, but you then have to look at the satellite as a whole. What you pay most is the size of the solar panels, RTG, etc., which grows immediately. The message of the graph is to find the correct compromise; otherwise the mass explodes on the other side.
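The Tsiolkovsky comparison above can be sketched directly. A minimal sketch, assuming the same dry mass and delta-V for both options; the Isp values are typical class figures (bipropellant vs gridded ion), assumed for illustration:

```python
import math

# Rocket equation: m_p = m_dry * (exp(dv / (Isp * g0)) - 1)
G0 = 9.80665  # standard gravity [m/s^2]

def propellant_mass(m_dry_kg, dv_ms, isp_s):
    """Propellant mass [kg] for a given dry mass, delta-v and specific impulse."""
    return m_dry_kg * (math.exp(dv_ms / (isp_s * G0)) - 1.0)

# 500 kg dry mass, 2 km/s delta-v budget
m_chem = propellant_mass(500.0, 2000.0, 300.0)    # chemical-class Isp
m_elec = propellant_mass(500.0, 2000.0, 3000.0)   # gridded-ion-class Isp
print(f"chemical: {m_chem:.0f} kg, electric: {m_elec:.0f} kg")
```

The propellant saving is dramatic, but as the text says, this is only one side of the trade: the electric option must then pay for it in solar array (or RTG) mass and in transfer time.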
Let's go through the three categories and six solutions mentioned before:
- Electrothermal: resistojet and arcjet
- Electrostatic: Gridded EP and Field Emission EP
- Electromagnetic: Hall (magneto static) and pulsed plasma thruster (PPT)

Electrothermal

We use an electrical mechanism to transfer energy to a propellant that remains neutral. The two solutions are really simple: in the arcjet, the propellant passes through an arc created by the imposed electric field; in the resistojet, the heating is done simply by a resistor. In both, the propellant is accelerated and then expanded through a nozzle. As mentioned, the thrust is quite low; reported applications are station keeping and relative dynamics. The specific impulses are slightly better than chemical. The critical point is thermal control (TCS): these are hot spots, as for the chemical solutions.
Some numbers: do not forget the quoted lifetime and in-flight endurance, because they will strongly shape your control philosophy even during design. For the resistojet the power is underlined: it is affordable, but keep in mind that it could be even half of the whole onboard power demand. The power level is not so high from a ground perspective, but it is very high in terms of the total spacecraft power budget.

Electrostatic

Gridded Ion Thrusters

Used as primary propulsion, again for orbital maneuvering. The principle is to accelerate ionized particles. As seen in the environment section, the exhaust must be neutralized, both to avoid contamination from the spacecraft's perspective and to avoid a thrust reduction due to the plasma environment around the spacecraft and the charging of the spacecraft itself.
The classical scheme is shown in the figure: electrons are injected into the chamber to heat the propellant and generate the ions. The injected electrons are set in motion by the electromagnetic field created by elements placed around the thruster chamber; the electrons move radially, the propellant is inserted axially, and their impacts generate the ions.

To accelerate the propellant ions (the ions, because they are the massive species), there are grids that create a high negative potential. Finally, there is a neutralizer: an element that injects electrons into the exiting ion beam to obtain neutral particles in the plume.

• Ionization mechanism
o Electron discharge along the magnetic field in a cylindrical chamber
o Electron bombardment of the noble gas flow
• Ion acceleration
o High-potential (kV) electric field across a narrow gap between two perforated grids
o Electrostatic ion acceleration through the E-field
o Exhaust velocities > 10^4 m/s in the ion plume
o A third outer grid reduces the ion exhaust velocity to the optimum and prevents back-contamination
• Ion plume neutralization
o A hot hollow cathode outside the engine emits electrons and a low neutral gas flow
o The electrons discharge into the ion plume and combine with the ions to form neutrals
The point is how the electron mobility is generated. This can be done in three ways, which categorize the solutions and are also quite related to nationality; the one described before is the electron-beam type.

• Ionization can be obtained by:

1. Electron beams (Kaufman) - UK/Russia
2. Radio frequency, with increased efficiency (RIT) - Germany: the field that sets the electrons in motion is generated by a radio-frequency signal around the chamber.
3. Microwaves (ECR) - USA/Japan
Let's pass to the performance. Usually xenon is applied, at least so far; there is some research ongoing because of the scarcity of that element. The efficiency is quite good, more than 60%. Another aspect is that the plume is quite narrow thanks to the grid (unlike Hall thrusters). The power processing unit mass density with respect to power is about 1/3 higher than for Hall thrusters. Keep in mind that you usually have one PPU (Power Processing Unit) per thruster; you may reach 2 thrusters per PPU, but no more.
Some drawbacks: the loss of efficiency in these thrusters is caused by grid erosion. The more they are used, the more the holes in the grids enlarge, and the effectiveness of the ion acceleration is reduced. Lifetime is another important aspect with respect to the lifecycle of the propulsion: the lifetime of the mission itself, the lifetime of the thrusters and the life cycle of the propulsion system, still in the order of tens of thousands of hours. These details are not so relevant for you, because you are not thruster builders, but keep in mind that grid materials need:
i. Low CTE (coefficient of thermal expansion), for thermal reasons;
ii. High mechanical strength with respect to launch loads;
iii. Low sputter yield (the sputtering yield is defined as the number of atoms ejected from the target material per incident ion, and is generally of the order of unity).
The candidate grid materials are:

• Molybdenum: low CTE, high strength, but a very high sputter yield has been observed, limiting life to a few 1,000 hours only;
• Carbon-carbon: zero CTE, low sputter yield, extending thruster life to a few 10,000s of hours, but its low strength is not suitable for large thrusters;
• Boron-coated molybdenum: lowest CTE, low sputter yield, high strength, suitable for large thrusters.

Examples of application. "Kaufman" means there is an electron gun, whose electron mobility is generated by a magnetic field around the chamber. Note the steering possibility on 2 axes, azimuth and elevation, which is important for the configuration programme; it can become a risk point for the whole design. Note also the different classes, and do not forget to look at the power demand. As for chemical propulsion, the electric system has its building blocks: the thrusters, then the power supply and the unit for the flow control. Nothing is shown about it in this presentation, but the propellant must still be stored and pressurized.
This is what was said about the power and, basically, the feeding voltage versus impulse and thrust: the more power, the better, and the same for the voltage. Just keep in mind that voltage control is not normally available: you cannot change the voltage without inserting a transformer, so you usually work at fixed voltage, which gives a simpler architecture for the electrical power supply on the spacecraft. Take care of any voltage variation if you want to play with impulse and thrust regardless of the Sun distance.
In the Artemis mission, both Kaufman thrusters (mobility through the magnetic field of a coil around the chamber) and radio-frequency thrusters were on board. The positioning of the thrusters in the configuration is worth discussing: they were coupled not to be switched on simultaneously, but for endurance.
Some numbers for electrostatic propulsion: keep in mind where they come from. Implementing the control law is not your case, but in general, as already mentioned, you must consider the lifetime. The lifetime shall be margined quite significantly, about 10% less than the datasheet value. These are critical constraints to keep in mind.

The more the grid voltage is increased, the better the expected thruster performance, but the mass of the EPS and of the whole architecture could grow significantly; a voltage converter is also needed, because the solar cells usually generate a low voltage. This technology spans from micronewtons up to hundreds of millinewtons. The grid stiffness and strength issues mentioned above come in here: the mechanical analysis at launch uses the classical random, sine and shock environments applied during testing.
FEEP thrusters:
This is the technology for very precise thrust levels, fine control. The scheme is a little bit different. There is a liquid metal propellant (cesium, indium, rubidium, mercury). Then there is a slit, exploiting a point effect, of the size of the particles that are passing through; the particles are then ionized by the field imposed at the end of the slit, accelerated, and neutralized as well. Operation can be continuous; you will see that for the PPT it will necessarily be pulsed, while here both possibilities exist. You have no moving parts, which is an advantage if you are dealing with fine control.

• Thrust produced by liquid-metal field ionization + ion acceleration by a strong electric field. Thrust level very low: 1-100 µN, good for fine attitude control
• Instantaneous switch on/off
• Power-to-thrust ratio 60 W/mN

The most classical application is on the LISA mission (gravitational waves). It is relevant to not have any moving part.
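The power-to-thrust ratio quoted above can be turned directly into a power demand for the EPS. A minimal sketch (the 60 W/mN ratio and the 1-100 µN range are from the notes; everything else is illustrative):

```python
# Rough power demand for a FEEP thruster from its power-to-thrust ratio.
POWER_TO_THRUST_W_PER_mN = 60.0  # from the notes

def feep_power_W(thrust_uN: float) -> float:
    """Electrical power demand [W] for a given thrust [microN]."""
    return POWER_TO_THRUST_W_PER_mN * (thrust_uN / 1000.0)

for thrust_uN in (1.0, 10.0, 100.0):  # the 1-100 microN range quoted above
    print(f"{thrust_uN:6.1f} uN -> {feep_power_W(thrust_uN):5.2f} W")
```

Even at the top of the range the demand stays at a few watts, which is why this technology fits fine attitude control rather than orbital maneuvering.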
Electromagnetic

Hall effect thrusters


The scheme is similar to the electrostatic one; the most important difference lies in how the electron motion is exploited. You still have electrons inserted, and you still want to create electron mobility to ionize the propellant. Now suppose I don't want grid erosion and I want to keep the performance nicer over time. This is possible if I can have a circulation of electrons at the mouth of the thruster. This is the idea: the electrostatic field is obtained not through a physical grid but through the combined effect of the current induced by the electrons moving in the tube. I localize these circulating electrons there so that they accelerate the ions created inside the chamber. You don't have grid erosion; the remaining issues lie basically in the aging of the anode and the cathode, the two coaxial elements in the thruster. The other drawback is that the plume is larger than in the gridded case, so the mechanical power of the plume is less concentrated in the axial direction, and hence less efficient from the momentum-exchange point of view. As before, some comparisons and performance figures follow.

• Ionization
o Electrons generated by external cathode
o Electrons injected into dielectric annular chamber,
drawn towards axial anode
o Radial B-field between inner/outer poles of magnets
o Lorentz force on electrons crossing radial magnetic
field lines causes electron cyclotron motion in
chamber -> helical path towards anode
o Neutral gas injected into chamber collide with
electrons
• Ion acceleration
o Acceleration of ions by the self-established E-field
created by the electron current induced by Lorentz
force
• Ion plume neutralization
o Additional electrons emitted by external cathode
o Drawn into ion plume (+ve), combine with ions

Performance characteristics:

- Lower specific impulse than GIEs (gridded ion engines): 1,800-2,600 s
- Lower thrust efficiency than GIEs: 50-60% (90% ionization; 70% discharge)
- Higher thrust density: 10 N/m²
- Lower specific power: ~15-17 W/mN
- Propellant: Xe, Bi
- Coarse but wide throttling capability
- Wide-angle ion plume: 45-60°
- S/c integration issues: contamination from ceramic chamber erosion, external s/c surface erosion from the wide plume, EMC tests
- Lower voltage (300-500 V) -> lower PPU mass (~1.5 kg/kW)
- Simpler design (no grids) and greater reliability than GIEs
Some examples of these. It is not a matter of lifetime but of how widely these units flew, and so of the cumulative endurance.
There is an example from a European company using the Hall effect; the units came from Russia.

PULSED PLASMA THRUSTERS: PPT

The last technology solution, which stays at the thrust level of the FEEP but with an electromagnetic acceleration, is the PPT. Here you have a solid propellant that is pushed into an electric arc region by a spring. The idea is that you have an electric field to create the mobility of the propellant particles; then, thanks to the electric elements in motion, you have an induced magnetic field, and at this point the acceleration is given by the Lorentz force (right-hand rule).

• Features: high specific impulse and low power and fuel requirements; pulsing
• Application: SK maneuvers
• Fuel: Teflon
• Principle: energy is stored in a capacitor; an igniter shoots electrons between anode and cathode to discharge the capacitor and create an arc; the arc evaporates and ionizes the solid fuel, which accelerates out of the thruster by the Lorentz force provoked by the induced electromagnetic field. The capacitor is then charged up again from a power supply and the pulse cycle repeated.
An example is reported, to give an idea of how different the shapes of these are. I just want to focus again on the power, which is still quite limited: keep in mind these orders of magnitude, three orders of magnitude below the other solutions. There is no way to use them for center-of-mass control. They have been proposed for orbital control in the domain of CubeSats, but on a classical satellite FEEP and PPT are not feasible for orbital maneuvering.

Features:

• Nontoxic propellant
• Low power demand (50-70 W)
• High specific impulse: 650-1350 s
• Very small impulse bits: 90-860 µN·s
• Single capacitor → multiple thrusters
• Mass 5-6 kg
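The very small impulse bits explain why PPTs are confined to fine control: a sketch of the pulse count needed for even a tiny maneuver (the 90-860 µN·s bit range is from the notes; the 10 kg spacecraft mass and 0.1 m/s delta-v are illustrative assumptions):

```python
# How many PPT pulses are needed for a given delta-v on a small spacecraft.
def pulses_needed(mass_kg, delta_v_m_s, impulse_bit_uNs):
    total_impulse = mass_kg * delta_v_m_s   # required impulse [N*s]
    bit = impulse_bit_uNs * 1e-6            # impulse per pulse [N*s]
    return total_impulse / bit

n = pulses_needed(mass_kg=10.0, delta_v_m_s=0.1, impulse_bit_uNs=860.0)
print(f"{n:.0f} pulses")  # over a thousand pulses even with the largest bit
```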
Focus on the graph: spacecraft mass on the horizontal axis and ACS mass on the vertical axis. The comparison is between different impulses for the PPT and the reaction wheels. As expected, with the increase of spacecraft mass, the mass of the momentum-exchange devices increases almost linearly, while for the PPT it basically remains the same. This stresses the advantage if your criterion is mass. There is also a comparison, considering the saturation part, with monopropellants.
This is just a reminder of what I said before. This should be the BepiColombo architecture: classical storage for the fuel, highly pressurized. Then you have four thrusters, with two PPUs controlling two thrusters each. These four thrusters were activated simultaneously.

Electric propulsion summary

Propulsion solution comparison

03/05/22

TTMTC: ENCODING
Last time we stopped with coding. One of the two meanings of coding is protection against errors, and we roughly saw the two large categories we can play with: on one side block encoding, on the other side convolutional encoding. The message is: this is a step you have to finalize, it is part of your trade-off. But keep an eye on the data volume, so on the data rate you provoke.
The bit rate is different from the data rate, because it includes the encoding as well. Roughly speaking, the minimum is the same number of bits you selected for the resolution of your data; in the worst situation it could even be doubled, depending also on the kind of modulation you select.
We also entered this kind of reference: many graphs, depending on the topic. On the x axis you always have the same quantity, the energy per bit versus the noise density, Eb/N0. On the vertical axis: the bit error rate (BER) you accept on a bunch of data. As a recall: the science or payload data are those you can sacrifice, not the telemetry! So, in general, the BER for telemetry, i.e. the housekeeping, the health of your platform, is more precious than science or payload data, and is of course set at the lower level. Roughly speaking, the BER is set to 10^-7/10^-6 for telemetry and 10^-5 for science.
The entry point depends on what the driver is in the mission: either it is the technology you have in your hands, so Eb/N0 is somehow fixed (for example by the power that you have), or it is the criticality of the mission, so that not having errors in the telecom link is mandatory, and then you enter with the BER. So be flexible in that.
Then, what is classically represented depends on the topic. In this case, what has been compared is the relationship between the power at the receiver, taking the noise into account, and the error you can tolerate, parametrized with respect to the encoding. Then you have graphs with respect to the modulation, with other meanings of encoding for the PCM (pulse code modulation, i.e. the shaping of the digital signal, which we are going to see), or with all of those together.
Be careful about the reference of the graph and what is plotted. Is it just the coding? Coding + modulation? The way you shape your digital signal + the modulation?
And of course, as expected, the more you work with encoding, the better it is in terms of Eb versus N0. The goal is not to increase Eb, but to be capable of facing the noise with a very low Eb, so that I can perceive and manage a very low signal with respect to the noise that I have.
You use these graphs to size. For example: you select the BER, you enter the graph, you select convolutional coding with rate = 1/2, K = 7, and you exit with the expected minimum Eb/N0, say 4 dB. At this point you margin: here the margin is at least 3 dB. So, when you do your link budget by yourselves, you keep this 4 dB as a reference and put 7 dB as your threshold. Use the link budget formulation reported afterwards: you compute it and then compare your output with this 7 dB. If you are above, that's ok; if you are below, you shall retune. Is it a diameter? A frequency? A power? A temperature? All the dofs you have. It could even be the distance at which you make the contact, so operations: you don't talk all the way between apoapsis and periapsis, but select periapsis only. So you have a lot of dofs to tune to stay within that limit. In short: graphs are used to have a threshold, to have a margin, and then to size.
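The bookkeeping just described can be sketched in a few lines (the 4 dB required Eb/N0 and the 3 dB margin are the example values from the notes; the achieved values are illustrative):

```python
# Link-margin check: required Eb/N0 from the BER/coding graph, plus a
# design margin, gives the threshold to compare the computed budget against.
def link_margin_ok(achieved_EbN0_dB, required_EbN0_dB=4.0, margin_dB=3.0):
    threshold = required_EbN0_dB + margin_dB  # e.g. 4 + 3 = 7 dB
    return achieved_EbN0_dB >= threshold, threshold

ok, thr = link_margin_ok(8.2)   # 8.2 dB from your own budget: above threshold
print(ok, thr)
ok, _ = link_margin_ok(6.5)     # below threshold -> retune diameter/frequency/power...
print(ok)
```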

The signal modulation - Analogue

When you have your data, you have quantized it, you have formalized the way you want to transmit it, and you have encoded it against errors, you have to put it on a carrier, because the signal to be received needs an antenna, an oscillator of the size of the wavelength of the signal. So it is obvious you need a high frequency, otherwise you would have oscillators of a size that makes no sense for a spacecraft or even for a ground antenna.
There are many different ways to modulate your carrier. Here you see what you can do for an analogue signal, but conceptually for a digital one it is the same. You have a tone you are going to modulate (look at the picture top-down) and you can represent it with an oscillatory formulation. At that point you make your selection: you can modulate the amplitude, tune the frequency, or modulate the phase of your carrier with respect to the signal you want to send. Of course, there are differences between those. Mathematically speaking, the final signal is reported: still oscillatory, but with a different formulation. Not the point for us. The point is basically this one: look at your carrier frequencies in the frequency domain.
Signal modulation & channel bandwidth
The important point is the band in which your signal is contained, because when you demodulate the signal on arrival you have to consider in which band around the carrier frequency you keep looking, in terms of power, for the signal you want to receive. The benefit of amplitude modulation is that you have a fixed band around the carrier: in the picture you have your carrier frequency, and then the specific frequencies at which you have the whole content of the signal you are sending, so the power is all contained in a band between (carrier − modulation) and (carrier + modulation). You can play with the so-called modulation index β = ∆fc/fm (∆fc is the carrier frequency deviation), which defines how much power you put in the carrier and how much in the modulating signal, so that when you demodulate you know where the actual content is. What is important to have in mind is that this is the benefit of amplitude modulation, and that the contrary holds for the frequency and phase modulation approaches: in those cases you spread the power over a band that is in principle unlimited, and it is up to you to define where you cut around the carrier, i.e. up to which frequency offset with respect to the carrier you consider the power; it is a matter of the band limit selected by you. This is the drawback of frequency and phase modulation. On the other side, they are less affected by noise, and therefore space classically never uses amplitude modulation; it classically uses phase modulation.
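A small sketch of the bandwidth bookkeeping above: AM occupies a fixed band of 2·fm, while for FM the in-principle unlimited band is classically cut with Carson's rule, B ≈ 2·fm·(β + 1), using the modulation index β = ∆fc/fm defined above (Carson's rule is a standard approximation, not something from the notes; the tone frequency is illustrative):

```python
# Occupied bandwidth around the carrier for AM vs FM (Carson's rule).
def am_bandwidth(f_m):
    return 2.0 * f_m                    # fixed band: carrier +/- modulation

def fm_bandwidth_carson(f_m, beta):
    return 2.0 * f_m * (beta + 1.0)     # where you "cut" the FM spectrum

f_m = 10e3                              # 10 kHz modulating tone (illustrative)
print(am_bandwidth(f_m))                # 20 kHz
print(fm_bandwidth_carson(f_m, 5))      # 120 kHz: wider, but more noise-robust
```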

PCM Encoding (pulse code modulation - digital)

The other building block is the other meaning of encoding. This means I have a digital signal, a sequence of ones and zeros, and I shape the electric signal that will then modulate the carrier; how, is up to us. Here you have all the possibilities. Have in mind: the signal you have to send is the one at the top, already coded against bit errors, so you have already made the decision of either convolutional or block encoding. This is the overall block you have to send; then the shaping is up to you.

Two examples in red. The simplest is the signal of the 5th row: zero is zero, then I change the level whenever I have a one. Another way: I change when I have a jump from zero to one, and I don't change when I go from one to zero. There are plenty of strategies I can adopt. The difference? The number of changes you have, and so the number of signal transitions you have to modulate onto the carrier. So you are playing with the volume of data you have to send, therefore the data rate, therefore the noise, therefore the link budget, so the pure power of the signal you have at the end of the chain.

Decision making, tradeoff for playing on the overall link budget.
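The two strategies in red can be compared by simply counting transitions: fewer transitions means fewer signal changes to modulate onto the carrier. A sketch (the bit string is illustrative):

```python
# Counting level transitions for two of the line-coding strategies above:
# (a) toggle the level at every '1'; (b) toggle only on a 0 -> 1 jump.
def transitions_toggle_on_one(bits):
    return sum(1 for b in bits if b == 1)

def transitions_on_zero_to_one(bits):
    return sum(1 for prev, cur in zip(bits, bits[1:]) if (prev, cur) == (0, 1))

stream = [0, 1, 1, 0, 1, 0, 0, 1]          # illustrative bit string
print(transitions_toggle_on_one(stream))    # 4 toggles
print(transitions_on_zero_to_one(stream))   # 3 jumps from 0 to 1
```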


Modem: the signal modulation – Digital

Carrier modulation alternatives →

You have your PCM signal. You can do the same as we have seen with the analogue signal: you can modulate the amplitude of the carrier, the frequency or the phase (these are the 3 dofs of an electromagnetic signal you can use for communication; play with one of them). Either digital or analogue, what you do for the modulation is, schematically speaking, the same.
Signal modulation & encoding
This is quite relevant for you at a very high level of sizing, not being telecom engineers. Let's look at the right side of the chart. Top-down: the signal you want to send, encoding already included, and different ways of phase or frequency modulation. On the left you see how it is coded (not the point for us). The relevant point is on the right: R is the data rate, which coincides with the bandwidth. So you see that, depending on the kind of modulation, you have the resulting data rate imposed by the modulation you select.
If you start with a given size of the data and you go with BPSK (binary phase shift keying), you keep your data rate. This is already positive: you have no exploding effect on the data rate to be sent. The one typically used, and that you should select, is QPSK (quadrature phase shift keying), where the phase changes in steps of 90°, because it has a beneficial effect. Imagine you have your string; the string is your sensor data. Now you enlarge it, because you are encoding it to protect it from errors, and with a convolutional code it becomes 2R as a string. If you then use QPSK you jump back to the size of the starting point, with the benefit of being already encoded for error protection. In some sense, it didn't explode.
This is how you read the chart: depending on the kind of modulation, you have an effect on the starting data rate. Again, recall: R in this chart means the message that has already been encoded against errors; I just have to modulate it. It is important for the link budget! This number R stays at the denominator of the formulation, in the environment of the noise computation, so it is detrimental to Eb versus N0, driving with an inverse law the level of Eb/N0 you will have at the end of the chain. This is why you keep R large because you have a lot of data, but limited because you want to reduce the noise.
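The bookkeeping above can be sketched directly: a rate-1/2 convolutional code doubles the bit rate, and QPSK (2 bits per symbol) halves the symbol rate again, bringing the occupied bandwidth back to the starting point (the 1 Mbit/s figure is illustrative):

```python
# Symbol rate after encoding and modulation: the coded stream is enlarged
# by 1/code_rate, then packed at bits_per_symbol per transmitted symbol.
def symbol_rate(info_bit_rate, code_rate, bits_per_symbol):
    coded_rate = info_bit_rate / code_rate   # encoding enlarges the stream
    return coded_rate / bits_per_symbol      # modulation packs bits per symbol

R = 1e6  # 1 Mbit/s of sensor data (illustrative)
print(symbol_rate(R, code_rate=0.5, bits_per_symbol=1))  # BPSK: 2 Msym/s
print(symbol_rate(R, code_rate=0.5, bits_per_symbol=2))  # QPSK: back to 1 Msym/s
```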
Different solutions in modulation, i.e. in putting your already-encoded signal on the carrier, have pros and cons in each case. The one typically used makes a very nice utilization of the bands you have in terms of power, while the last one in the picture uses the spectrum badly: in fact, you enlarge the utilization of the band that you have. Recall that the data rate is strictly related to the band you have; the band is the range of frequencies you can use to contain the signal around the carrier frequency.
The smaller it is, the better for the signal receiver.

Error versus modulation: PSK

In this chart for design you see the same axes, with respect to the different operations you are doing. In this graph you have the modulation; in the next, maybe the one for the PCM encoding, so as to evaluate all the dofs you have on encoding and modulation. As expected, QPSK is the one behaving best. Recall that the utopian point is at the bottom left: minimum error and capability to get the signal with minimum energy.

Error versus coding/modulation

Same representation, but now you see the comparison with respect to the PCM, i.e. the way you prepare your string with respect to the voltage that then modulates the carrier, so the data rate to be sent and its reduction, and the overall combination of encoding and modulation: the classical chart you should find in books.

Analyzing the transmitting power

In the TX chain, which is also the RX chain, you have your data: if it is analogue, you digitalize it, you encode it, you modulate it to be sent, then you have the amplifier to send the signal. At the receiver you have the same, the other way around.
s\s components: High Power Amplifier

In space, 2 kinds of amplifier are to be considered:

➢ TWTA (Travelling Wave Tube Amplifier)
➢ SSA (Solid State Amplifier)

Historically, the TWTA was the classical choice: very nice in efficiency, i.e. in the conversion of electrical power into transmitted, emitted power.
The power you put in the link budget is the output of the amplifier! Not the electrical power that you get from the EPS, because there is an efficiency in between.
But they are not so nice in terms of volume and mass. This is reported in the graph, in which you have at the bottom the output power in terms of radiofrequency, i.e. the one you put in the link budget, and on the vertical axis the mass and the power needed, that is, the demand on the EPS (or what the EPS gives you as a percentage to be used for the telecom) depending on the mode. By mode I mean telecom mode: are you using a low frequency? An omnidirectional antenna, or a very directive antenna at high frequency?

Of course, it is the other way around with the solid-state amplifier: very light but less effective in power. Usually you use the SSA when the power you are playing with is quite low; otherwise you select the TWTA. It depends on you. Reverse engineering: look at what is on board and give a justification, then adapt mass and power demand accordingly. In real life it is the other way around: it is one of the trade-offs you do, depending on the power you have available and on the mode. The graph is obviously used for your cross-check: enter from the power and mass point of view and exit with the power.
s\s components: Antenna radiation pattern
We digitalized, we encoded, we modulated, we amplified. Now we send the signal, so: oscillators and reflectors. As you know, the power emitted by an oscillator is isotropic, so in principle the further you are, the lower the power will be at a given distance, because you have the same power distributed all over the sphere surface, as reported. So the power decreases with the inverse of the square of the distance at which you are.
Therefore, the scope of a reflector is to concentrate that power in a given direction. The more you reflect the isotropic power from an oscillator into a given direction, the higher the gain, because you are concentrating the power in one direction. This is basically the whole power emitted isotropically, concentrated in an angular direction thanks to a physical element, for example a parabolic antenna, to direct the signal. The gain is the ratio between the maximum flux and the isotropic one. It can be related directly to the size of the antenna and the frequency you are adopting. So the gain is something you can play with on board, selecting the kind of antenna, the frequency and the size.
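The gain/size/frequency relationship can be sketched with the common first-cut formulas for a parabolic antenna: G = η·(πDf/c)² and half-power beamwidth θ_3dB ≈ 70·λ/D degrees. These formulas and the 0.55 efficiency are standard textbook approximations, not values from the notes; the 1 m / 8 GHz example is illustrative:

```python
import math

C = 3e8  # speed of light [m/s]

def parabolic_gain_dB(D_m, f_Hz, eta=0.55):
    """Parabolic antenna gain [dB], assumed aperture efficiency eta."""
    return 10 * math.log10(eta * (math.pi * D_m * f_Hz / C) ** 2)

def beamwidth_deg(D_m, f_Hz):
    """Half-power beamwidth [deg], ~70*lambda/D approximation."""
    return 70.0 * (C / f_Hz) / D_m

# Illustrative: 1 m dish at X band (8 GHz)
print(f"G = {parabolic_gain_dB(1.0, 8e9):.1f} dB")
print(f"theta_3dB = {beamwidth_deg(1.0, 8e9):.2f} deg")
```

Note the trade-off discussed below: a larger dish raises the gain but shrinks the beam, tightening the pointing requirement.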

From the ground station point of view, you classically have the same flexibility: at the ground station, many antennas are available. You can select not only the frequency, but also the diameter of the antennas that are there. This is part of your link budget as well: selecting the ground station is not just selecting the location; you select the location, the frequencies, the size of the antennas, so basically the gain. In this slide, focus on the limit that is important for the beamwidth, which is where the power drops to half its maximum, so −3 dB, roughly the tangent of the lobe: in this way you define the beamwidth. The beamwidth, theta, is the aperture of the main lobe; at this level we don't take the side lobes into account, only the principal one. Why is this important? Because, of course, the more you increase the size of your reflector (this is for parabolic antennas only), the more you focus the power, so the lower the demand on the EPS. But there are drawbacks for the attitude: you save power for the TTMTC, but it might be the case that the power for pointing that antenna towards your target is higher than the power requested to send the signal. So keep an eye on the sizing of the whole system, not just saying "ok, there will be more maneuvers to point and keep the main beam towards the on-ground antenna or other satellites' antennas", because it might actually be better to have a larger beam: more powerful TTMTC, less demanding pointing system.

Of course, the peak of power is basically the power of the isotropic source times the gain, so the gain is in your hands, at least the one on board. You also have a bit of discretization on the ground, though the domain is not as large as on board.

s\s requirements: Beamwidth constraint

Note the simplified scheme of visibility between your point on orbit and the ground contact that you shall have (this is the Earth, but it holds wherever you are). Visibility is, of course, in terms of tangency: you have your planet, your orbit, your ground station. Ideally, you start having visibility as soon as you are tangent to the location, and that defines the whole visibility window. Then, for many reasons (the atmosphere, if there is any, which will disturb the signal; the morphology of the terrain, which will obstruct the signal as well), you must keep the ε angle, so you start having your contact with the ground a little bit later, according to an angle called the masking angle or elevation angle. This stays between 5° and 10°, and of course it applies at the entry but also at the exit of your visibility window. Whatever you are cross-checking or sizing, keep in mind this shall be the situation. Depending on the ground station you select, the datasheet of the station may report which masking angle to adopt; if you don't have any info, 5° to 10° is the correct margin to adopt.
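The effect of the masking angle on the visibility window can be sketched with the standard spherical-geometry relation for a circular orbit: the usable arc corresponds to an Earth-central angle λ = 90° − ε − η, with sin(η) = Re·cos(ε)/(Re + h). This relation and the 500 km altitude are illustrative assumptions, not from the notes:

```python
import math

def central_angle_deg(h_km, eps_deg, Re_km=6378.0):
    """Earth-central half-angle of visibility for elevation mask eps."""
    eps = math.radians(eps_deg)
    eta = math.asin(Re_km * math.cos(eps) / (Re_km + h_km))  # nadir angle
    return 90.0 - eps_deg - math.degrees(eta)

for eps in (0.0, 5.0, 10.0):   # ideal horizon vs the 5-10 deg masking above
    print(f"eps = {eps:4.1f} deg -> lambda = {central_angle_deg(500.0, eps):5.2f} deg")
```

The window shrinks noticeably as the mask grows, which is why the masking angle must enter the contact-time budget.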
Antennas characteristics
In the table you have the different kinds of antennas you can use, with the formulations to compute the gain. There is always a quantity in the gain formulation that depends on a physical size: the diameter for the parabolic reflector, some diameter for the helix, the length and the aperture for the horn, but the formulation is a little bit different in each case. So recheck and control twice what you did with respect to the gain, not to adopt the parabolic formula if you don't have any parabolic antenna on board.
Then, which one of the antennas to select? It depends, of course, on the beamwidth and on the frequency you are adopting. Classically, high frequencies go with the parabolic antenna; otherwise you stay with the patch or the phased arrays, and the horn for the medium frequencies.
Antennas main alternatives

Examples, physical elements:

• Horn antennas are medium-gain antennas, good for S band and lower; 4 GHz is just a little above S band, as C band sits just above S, which stays between 2 and 2.5 GHz.
• Helical antennas, used for low data rates: for sure not science, but low-rate telemetry data.
• Patch antennas. These are very nice from the configuration point of view, not necessarily with respect to the beam (3-7 dB is a low gain; 50, 60, 70 dB is a high-gain antenna), but very convenient because they are easy to locate on the configuration when you don't have room.
• Phased arrays, which can be a very nice solution. The gain is obtained not with a reflector but by interference of the signals: you have an array of emitters, as you see in the picture, and depending on how the signal is sent you have constructive or destructive interference, so the beam is built electronically. Drawback: all those parts stay inside the satellite (the front outside is an array of omnidirectional elements, behind it the electronics) and, as you can imagine, criticality comes from the thermal point of view, because it is a hot part of the system, and from the managing software: you increase complexity in another domain. But you avoid, for example, a mechanism, which is a risky part of your system. Typically, mechanisms are included in antenna systems to have more dofs for pointing, and of course on the solar panels (not the case for this discussion).
Sizing the system: Losses
Now take into account that our signal is travelling out, spread or concentrated as we want, directed by the kind of reflector that we have. There are losses and noise to keep in mind. I go fast (check the exercise with Andrea).

- TRANSMISSION LOSSES
➢ Free space
The first loss you have to consider comes from the signal travelling in space: whatever you are spreading around, depending on the distance, it is reduced with the inverse square of the distance. So the SPACE LOSSES are something you necessarily pay for. They depend on the distance and on the frequency that you have: the higher the frequency, the higher the losses will be. (You will see an example afterwards.)
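This inverse-square, frequency-dependent space loss, L_s = (4πd/λ)², is conventionally computed in dB as 20·log10(4πdf/c). A sketch (the 2000 km slant range and the S/X band frequencies are illustrative):

```python
import math

C = 3e8  # speed of light [m/s]

def free_space_loss_dB(d_m, f_Hz):
    """Free-space path loss [dB]: 20*log10(4*pi*d*f/c)."""
    return 20.0 * math.log10(4.0 * math.pi * d_m * f_Hz / C)

# LEO downlink, 2000 km slant range: higher frequency -> higher loss
print(f"S band (2.2 GHz): {free_space_loss_dB(2e6, 2.2e9):.1f} dB")
print(f"X band (8.4 GHz): {free_space_loss_dB(2e6, 8.4e9):.1f} dB")
```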

Have in mind: when you select the frequency, at the very starting point you select it as you prefer, but then you have to cross-check with the ITU, the International Telecommunication Union regulations, whether you can use those frequencies or not: you are restricted. Have in mind this squared formulation.
The effect of the frequency on the received power depends on your choice of degrees of freedom, because the frequency enters the gain and the losses (any of them: space / atmospheric / rain (still atmospheric) / noise as a whole), so whether the frequency has a beneficial or a detrimental effect depends on which quantity you assume as a degree of freedom. This is the example: if you look at the received power, playing with the transmission power (after the amplifier), the gain of the antenna on board and the gain at the receiver, then you see that increasing the frequency lowers the power you receive. If instead you play with the effective areas, and not with the gains (because they involve the frequency as well), then the frequency has the opposite, beneficial effect. If you keep as variables the gain for the transmitter and the effective area for the receiver, then it is independent from the frequency. So pay attention to which quantities you decide to be your variables, and then play accordingly with the frequency. There is no fixed statement, because of the dependencies you have; at the end of the day, for the received power, an apparently important variable might not affect your sizing at all, and you have to focus on something else.
➢ Misalignment losses

If you have directive antennas, ideally you want them perfectly aligned, with the maximum power in the beams. This will never happen, because of attitude control errors and synchronization errors between the ground antenna mechanisms and your spacecraft, so you shall include this error in terms of losses, which of course depend on the beamwidth you have and on the pointing accuracy of your system, the small e.
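A common first-cut approximation for this loss, used in preliminary sizing (it is a standard textbook formula, not from the notes), relates the pointing error e and the half-power beamwidth θ as L_p ≈ −12·(e/θ)² dB:

```python
# Pointing (misalignment) loss approximation for a parabolic beam.
def pointing_loss_dB(error_deg, beamwidth_deg):
    return -12.0 * (error_deg / beamwidth_deg) ** 2

print(pointing_loss_dB(0.5, 2.0))   # error at 1/4 of the beamwidth: tolerable
print(pointing_loss_dB(1.0, 2.0))   # error at half the beamwidth: -3 dB, half the power
```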

➢ Atmospheric losses
Molecules in the atmosphere may be excited and then absorb the frequency you are playing with. This is a smooth effect, depending on the presence or not of rain, i.e. of water basically (see the graph): select your frequency and identify the loss to be considered. The same with dry atmosphere. This is for Earth; you shall do similarly for other planets, in case you had to communicate with the ground of Venus/Mars. A note: look at where ground stations are. They are in dry, flat locations, or very high in altitude, to avoid atmospheric disturbances. If this is not the case, ask yourself why: what constrained them to select a ground station largely affected by rain, for example Redu in Belgium, in the middle of the countryside, with a lot of rain? Why?
The peak you see around 60 GHz is due to molecular absorption in the atmosphere; that is why you shall stay out of those frequencies.

- NOISE
"This is the last part."
The counterpart of your positive effects in terms of gains and transmitted power. The bandwidth enters the noise; I recall that the band is basically the data rate, and the data rate depends on the modulation and the encoding you selected. That is why you shall keep it as low as possible. And then the noise is affected by the temperature.

k = Boltzmann's constant, 1.38e-23 [W s K^-1]; B = equivalent noise bandwidth [Hz] (1.12·B−3dB); Te = equivalent noise temperature [K]; N0 = k·Te = noise power spectral density [J]

Te is an equivalent temperature. For the antennas and for the passive components it is basically the physical (thermal) temperature. For the active components you have a noise factor to be included. Here you see the chain to compute your equivalent temperature.

Te is the thermal temperature for the antennas, and the same for cables, which are passive. For active elements you shall use a figure of merit, which is basically an efficiency in power transmission, or power conversion.
I skip those charts (see Andrea).
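The chain for the equivalent temperature mentioned above is classically the Friis cascade: each stage's noise contribution is divided by the gains preceding it, Te = T1 + T2/G1 + T3/(G1·G2) + ... — a sketch, with illustrative stage values and linear (not dB) gains:

```python
# Cascaded equivalent noise temperature of a receive chain (Friis formula).
def cascade_noise_temperature(stages):
    """stages: list of (T_kelvin, gain_linear) tuples, in chain order."""
    Te, gain_so_far = 0.0, 1.0
    for T, G in stages:
        Te += T / gain_so_far     # later stages are attenuated by earlier gains
        gain_so_far *= G
    return Te

chain = [(35.0, 100.0),   # low-noise front end: dominates the total
         (150.0, 10.0),   # downconverter
         (400.0, 1.0)]    # demodulator
print(cascade_noise_temperature(chain))  # 35 + 1.5 + 0.4 = 36.9 K
```

This is why the first element of the chain drives the whole noise budget.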

Noise: Environment effects

You see frequencies and noise for the antennas in terms of Eb.
(she'll put a legend)

D refers to the Sun in the FOV of the antenna.

To have 20/30 dB of noise is a nightmare; the classical level of noise shall stay within a few dBs.

LINK BUDGET COMPUTATION

Link budget equation:

Verify it in the design. Eb/N0 is what you compute, as mentioned before; it is compared with the graphs and depends on the selection of coding and modulation. The transmission power is the power coming from the electrical power subsystem, through the efficiency of the amplifier you selected. The line losses at your level could be 0.1/0.2, very low losses that stay in the cables connecting all the elements inside the satellite before sending the signal. Then the two gains, already mentioned: if you play at this level, the frequency has a beneficial effect; if instead you open them up in terms of characteristic size (a length, not necessarily a diameter), pay attention to the antenna you are taking into account. If it is a parabolic, it is the classical formula you have in mind; if it is not, check that you use the correct formulation for the gain. This is for the antenna on board.

This equation applies to both downlink and uplink; you just swap the roles of transmitter and receiver. Then the space losses, where, as we saw, frequency and distance enter. And the operations: it is up to the designer to understand which slots of the orbit are used for communications, so you have a link budget that depends on the phase, on the antennas, and so on, not just one for all.
The atmospheric losses, to keep in mind: if your signal goes through any medium that is not vacuum, you will of course have a variation in the signal, a power loss.
And then the noise: at your level, for a downlink, you can assume as equivalent temperature the one given by the ground station. Ground stations are usually cooled, so temperatures as low as 80 K are possible: very low temperature, very low noise can be achieved on ground. On board this is not the same: you have the satellite ambient temperature, so for the uplink the noise at the command receiver is largely different from the downlink case. But when you send commands, i.e. when data are received on board, the data rate is definitely lower than in the space-to-ground transmission, so in this way you contain the overall noise you want to minimize. Here you see how important it is to play with the signal to be sent, tuning power against noise at the end of the day.

This number shall have the 3dB margin with respect to the graphs we mentioned before.
-----------------------------------------------------------------------------------------------------------------------------------
EXERCISE TTMTC
When you deal with the telecommunication subsystem, you have to design a communication link in terms of transmit and receive frequency selection, beamwidth, antenna gain, diameter, power… all the input data you need to size the communication: antenna type and dimension, mass, plus the system noise density and the error-over-noise ratio, the two quantities that tell you whether the communication works, i.e. whether transmitter and receiver are able to send and receive data and to translate them into the real information they carry.
After design and sizing, the outputs are the power budget, mass budget, requirements, configuration…

EXAMPLE: Galileo spacecraft

Galileo is complex in terms of communication because the payload is itself a telecommunication system. In addition, Galileo needs to send telemetry to ground and to receive the commands uploaded from ground. This is done with two types of communication, UHF and S band. Today: compare the basic sizing formulas with the actual needs of Galileo for the telemetry downlink.

DATA:

S-band telecommunication link

Frequencies: f(RX) = 2048 MHz in receiving; f(TX) = 2225 MHz in transmitting.
Today we take the downlink example, from the spacecraft to the ground (telemetry), so the frequency we need is f(TX).
f = 2225 MHz; downlink data rate = 10.4 kbps; the antenna on Galileo that performs the telemetry link is a helix antenna (D = 24 cm, L = 34.6 cm); TWTA amplifier.
We also need some data about the Galileo orbit: MEO orbit, R = 29800 km, i = 56°; ground stations: ESTRACK (even if they are more than one); P_input = 36 W (assumed value, not the actual Galileo figure).

Let’s start with the carrier frequency selection. In this case it is given information. But when you start a design/sizing from scratch you may not have this data. To select the carrier frequency you need information about the data rate you want (higher data rates call for higher frequencies) and about the ground station, i.e. which frequency ranges it can manage. Frequency also drives the antenna: for a given gain, low frequency means a bigger antenna, high frequency a smaller diameter…
First thing to do: starting from the input power and knowing the amplifier, compute the actual power used for the transmission. The transmitted power is the efficiency of the amplifier times the input power for the S-band communication. If you find nothing about the efficiency of the amplifier, go through the chart and assume a value: entering with the input power (36 W) on the TWTA curve, you read more or less an efficiency of μ_amp = 0.45. A conservative number.

P_tx = μ_amp · P_in = 16.2 W
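In code, with the assumed values of this example:

```python
# Transmitted RF power from the assumed input power and the TWTA efficiency
# read from the chart (~0.45 at 36 W input)
eta_amp = 0.45
P_in = 36.0            # W, assumed electrical input power
P_tx = eta_amp * P_in  # W of RF power handed to the antenna, -> 16.2 W
```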

We have to make another assumption about the BER, the maximum bit error probability we accept in the communication. The selection of the BER (Bit Error Rate) depends on the application: it is less stringent (higher) for telemetry and data downlink and more stringent (lower) for telecommand uplink. Typical values:

▪ BER = 10^-5 (TM and data downlink)

▪ BER = 10^-7 (TC uplink)

For the uplink of TC we want the minimum possibility of error, so its BER is lower than for the downlink of TM.

We select in this case BER = 10^-5.


Now let’s deal with modulation and encoding. Modulation usually increases the achievable data rate: the data rate is the amount of information we want to send to ground per second, and with modulation we try to increase it while keeping the bandwidth constant. The other way around, encoding reduces the useful data rate for a constant bandwidth, because it adds redundancy. This is a design choice. If, having selected a modulation and an encoding, at the end of the sizing you find that your signal-to-noise ratio does not respect the required margins, among other things you can try to change the modulation and the encoding to get a better ratio.
As you see from the graphs, selecting a modulation or an encoding forces you to have a minimum error-over-noise ratio, depending on the Bit Error Rate that you want. For example: with BER = 10^-5 you enter the graph and, depending on modulation and encoding, you find the minimum Eb/N0.
Once you have selected the encoding and the modulation, you have an encoding coefficient and a modulation coefficient. To compute the real data rate on the channel, multiply by the encoding coefficient and divide by the modulation coefficient.

In this case, I selected the most common choices for modulation and encoding.

For the modulation: BPSK, with coefficient α_mod = 1

For the encoding: convolutional encoding, K = 7, rate 1/2, α_enc = 2

R_data,real = R_data · α_enc / α_mod = 20.8 kbps
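The step above can be sketched numerically with this example’s values:

```python
# Channel ("real") data rate after encoding and modulation, as in the notes:
# R_real = R_data * alpha_enc / alpha_mod
R_data = 10.4e3   # bps, Galileo telemetry downlink rate
alpha_mod = 1     # BPSK
alpha_enc = 2     # convolutional K = 7, rate 1/2 doubles the bits on channel
R_real = R_data * alpha_enc / alpha_mod  # -> 20800.0 bps
```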

Next step: design of the antenna.

If you start from scratch, you have a lot of choices. Once you have selected the kind of communication (S band, for example), some types of antennas are more suitable: S-band communications are usually done with helix or patch antennas, and indeed in this case we already have the data of a helix antenna.
There are experimental formulas linking the three variables diameter of the antenna, frequency and beamwidth: knowing two of the three, you can find the other one and derive all the properties of the antenna. Of course there are antennas with higher gains and antennas with lower gains; this depends on the type of communication you want.
We selected the helix antenna. We know the diameter and the frequency; from the frequency we compute the corresponding wavelength, and then we are able to compute the gain. There are experimental formulas, or you can use analytical formulas, with similar results: the antenna gain grows with the square of the antenna diameter and with the efficiency of the antenna (which depends on the kind of antenna), and it is inversely proportional to the wavelength squared.

λ = c/f = 0.135 m, μ_helix = 0.70 → G_ant = 8.42 dB

Of course, this is the transmitter. We also have to design the receiver, and this depends on the ground station you select. Galileo uses ESTRACK; no particular ground station was selected here, so we take the most constraining case. ESTRACK diameters range from 13.5 to 35 m: we use the smallest one, to have the most constrained link budget and to check whether we are able to communicate with the ground station in the worst-case scenario.
Also in this case you have the diameter of the antenna, the wavelength (the communication is at the same frequency) and the efficiency of the antenna (parabolic, in this case), so we are able to compute the gain of the receiver:

μ_parabolic = 0.55 → G_RX = 43.29 dB

Of course, if you have the datasheet of the ground station, use that information.
Knowing the diameter of the antenna and the wavelength, you are able to compute the beamwidth with the experimental formula for the PARABOLIC antenna:

θ = 0.588°
We have designed our antennas, on board and on ground. Now the losses. First: free-space losses, due to the distance of the spacecraft from the ground station; they depend on the range, and here we take the maximum distance from ground, r.

L_space = -188.86 dB

You can compute everything in dB, or without logarithms, dividing/multiplying all the factors; in dB, thanks to the properties of the logarithm, you only need sums and differences.
POINTING LOSSES, due to the misalignment between the ground station and the spacecraft:

L_point = -12 · (η/θ)^2 = -0.347 dB

where η is the accuracy you need in order to have the correct communication (if you do not have this data, you can assume ±0.1°) and θ is the beamwidth of the receiver. The major losses are the free-space ones; pointing, cable and atmospheric losses are lower.
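A quick numerical check of the pointing-loss figure, with the values used here:

```python
# Pointing loss L_p = -12 * (eta / theta)^2 [dB], with eta the pointing
# accuracy and theta the half-power beamwidth of the receiving antenna
eta_point = 0.1   # deg, assumed pointing accuracy
theta = 0.588     # deg, ground-station beamwidth from the sizing above
L_p = -12 * (eta_point / theta) ** 2  # ~ -0.347 dB
```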

ATMOSPHERIC LOSSES

Experimental, computed from graphs. They depend on the direction of the spacecraft with respect to the ground station, on rain, and on the communication frequency. At this frequency the rain attenuation is quite negligible, while the atmospheric losses are around 0.04 dB, quite low because the frequency is not so high: higher frequencies correspond to higher atmospheric losses.
Then the last loss: cable losses, which depend on the cables you have on the spacecraft or on the ground station. In this case they are on the spacecraft, which is sending the signal to ground. We take the worst case, -3 dB, to constrain the problem the most.
L_cable = -3 dB.
Now compute the EIRP, the effective isotropic radiated power, as the sum of the transmitted power, the gain of the transmitter antenna and the cable losses:

EIRP = P_tx + G_tx + L_cable = 17.51 dB

Now you can compute the received power, i.e. the power received by the ground station antenna, summing the EIRP, the gain of the receiver and all the losses between the spacecraft and the ground station (atmospheric, pointing and space losses):

P_RX = EIRP + L_space + L_atm + L_point + G_RX = -128.45 dB

This is the power received on ground.

Last assumption, if you do not have the data: the system noise density N0. N0 represents the noise on the ground antenna that is receiving the data; it depends on the Boltzmann constant and on the temperature of the antenna. The higher the temperature, the higher the noise, which is the reason why station antennas are kept at low temperature! For example, NASA’s DSN has a temperature of T_s = 21 K, very very low. When the spacecraft is the receiver, the temperature is usually around 250 K. In this case I assumed a temperature of 100 K, a very conservative value; the actual ESTRACK temperature is lower.

N0 = 10·log10(k·T_s) = -208.6 dB
Now I have everything and I can compute the error-over-noise ratio. Basically, Eb/N0 states the receiver’s capability to translate the signal and distinguish it from the noise: maybe your receiver is receiving some data but cannot translate them into the correct information because they are overwhelmed by noise. It is the received power minus the noise density (a denominator becomes a difference in dB) minus 10·log10(R):

Eb/N0 = P_RX − N0 − 10·log10(R) = 36.96 dB
Now we have to check that this ratio, obtained from design and sizing, is higher than the minimum ratio we need, determined by modulation and encoding, plus a margin (typically 3 dB). Entering the graph with the BER, at the intersection with the convolutional coding curve, the minimum ratio is around 4.5 dB:

(Eb/N0)_min ≈ 4.5 dB

Our ratio must be higher than the minimum plus the 3 dB margin:

Eb/N0 > (Eb/N0)_min + 3 dB = 7.5 dB
Of course, it is satisfied. Actually this ratio is quite high, higher than needed, so we can cut something off. For example, in a second iteration of the sizing we can reduce the transmitted power: the gain of the link will be lower, the ratio Eb/N0 will be lower, but maybe still high enough to overcome the minimum ratio plus the margin.
The output you need is Eb/N0, checked against the required level. If it is lower, you correct something upstream: add some power, choose a bigger antenna, change some other data. If it is much higher, you can reduce something: less power, a smaller antenna.
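The whole downlink chain above can be reproduced numerically. A minimal sketch, taking the two antenna gains and the small loss terms as computed in the notes, and keeping the assumed 36 W input power and 100 K noise temperature:

```python
import math

# Inputs from the worked example (gains/losses as computed in the notes;
# P_in and T_s were assumed values to begin with)
P_in   = 36.0      # W, input power to the TWTA (assumed)
eta    = 0.45      # TWTA efficiency from the chart
G_tx   = 8.42      # dB, on-board helix antenna gain
G_rx   = 43.29     # dB, 13.5 m ESTRACK parabolic antenna gain
L_cab  = -3.0      # dB, worst-case cable losses
L_atm  = -0.04     # dB, atmospheric losses at S band
L_pnt  = -0.347    # dB, pointing losses
f      = 2225e6    # Hz, downlink (TX) frequency
d      = 29800e3   # m, worst-case range
R_real = 20.8e3    # bps, real data rate after encoding/modulation
T_s    = 100.0     # K, assumed system noise temperature
k      = 1.38e-23  # J/K, Boltzmann constant

lam   = 3e8 / f                                   # wavelength, ~0.135 m
P_tx  = 10 * math.log10(eta * P_in)               # 16.2 W -> ~12.1 dBW
L_s   = 20 * math.log10(lam / (4 * math.pi * d))  # free-space loss, ~-188.9 dB
EIRP  = P_tx + G_tx + L_cab                       # ~17.5 dB
P_rx  = EIRP + L_s + L_atm + L_pnt + G_rx         # ~-128.45 dB
N0    = 10 * math.log10(k * T_s)                  # ~-208.6 dB
Eb_N0 = P_rx - N0 - 10 * math.log10(R_real)       # ~36.96 dB

required = 4.5 + 3.0  # (Eb/N0)_min from the BER chart + 3 dB margin
print(f"Eb/N0 = {Eb_N0:.2f} dB (required {required:.1f} dB)")
```

The result reproduces the ~36.96 dB of the notes, far above the 7.5 dB requirement, which is why a second iteration could cut transmitted power or antenna size.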
Last thing you need: compute the carrier signal-over-noise ratio. You need the carrier modulation reduction, i.e. the power spent in the modulation; it depends on the modulation index, usually a value between 45° and 90°, and the power loss due to modulation depends on this index. If you sum the modulation reduction and the received power, you find the carrier power, the total power available for the communication.

Now we can compute the signal-to-noise ratio: the carrier power minus the system noise minus 10·log10(B), where B is the noise bandwidth.

Once you compute this value, compare it with the minimum signal-to-noise ratio each ground station requires to be able to track the signal and distinguish it from the noise. For example, for DSN it is 10 dB (the best ground stations in the world); for other ground stations it may be higher.
You have the computed SNR and the minimum SNR: if the difference is higher than 3 dB (the typical margin), the design is ok, otherwise you have to change something. In this exercise you will find a carrier SNR quite higher than the minimum. A 3 dB margin is fine; you do not need 20/30 dB of margin, so you can iterate, reducing the power, changing the antenna…

12 May 2022. Slides: SSEO_L16_EPS_2022


ELECTRIC POWER SUBSYSTEM

Electric power:

1. Power source: it converts any energy source into electric energy


2. Energy storage: it stores the primary source energy excess to provide electric power supply
whenever the primary source is unavailable or for power peaks
3. Power distribution: Set of components (harness) devoted to the loads/sources interfaces
4. Power regulation and control: Set of components devoted to the power control in terms of
V and I to provide to the loads. Regulation is needed because of:
a. Loads requirements
b. Mission profile variability
c. Power source degradation
d. Batteries charge/discharge control
These are the functionalities of the EPS: not just generating electrical power but also storing it, which typically means batteries (though that is not the only option). Keep in mind whether in your mission there are situations in which it makes sense to size with respect to energy instead of power, depending on the scenario. Then there are the two other parts of the architecture: power distribution and power regulation and control. How do you distribute power to the rest of the loads? Keep in mind that the battery is a load as well, up to when it becomes the main source: secondary batteries shall be recharged. As soon as you have your source, you have to decide how to distribute the power and how to configure the architecture; this is true also for an isotope generator. Then, how to control the cases in which the power generation is higher than the demand, or the demand is higher than the power availability: which other equipment do you put in place, in parallel or in series, to do that? Either controlling the current, or controlling the voltage, or tracking the maximum power point. It is not just a matter of sizing, but also of putting the elements together and looking at how they are managed within the scheme that you have.
As always, this is the problem to be solved, the design to be obtained. You start by identifying the requirements. As for many other subsystems, you have to fill a budget from two different perspectives: at the very beginning the power budget is like the pointing budget, i.e. a request; at the end of the day it becomes your capability. The power budget starts as a demand, collecting requests in terms of phases, modes and components at power level. Then you do your design to translate these needs into the real power budget breakdown, which is your capability. The best condition is when they match perfectly; if they cannot, it is a matter of looping, adopting a different scheme of operations, or even changing the mission pointing profile. Have in mind that defining the power budget is one of the main final products. It comes from the configuration of the satellite equipment and the scientific payload, but also from the configuration in strict terms: is it a cube, is it a cylinder, can you put something on the body surface, do you need wings, do you have a center of mass you can barely touch, so that you cannot insert very heavy elements? Batteries are very heavy elements too: shape and mass distribution matter.
Note: this seems obvious, but don’t do the following: take the list of components, each with its power demand, sum all of them as the overall demand, call it the worst case and add large margins. That is not a well-done design. Enter a bit into the modes that you defined: it may be that some components are switched off; for example, in commissioning the payloads are off, or turned on one by one for checks; or during a transfer you may not use the high-precision sensors but only the coarse ones. Be worst-case, but be realistic: an oversized design is a bad design, even if it is margined.
At this point you have in mind the mission profile and the power required, and you can ask yourself which should be the sources, in terms of storage and in terms of primary generation. Is it a very short mission or not? Does it make sense to put solar cells, or just batteries, maybe better from the inertia matrix point of view, maybe better from the mass management point of view? Or is it a very long mission towards the Sun, where the temperature may suggest RTGs rather than solar cells? You can start trading off at least the solutions in terms of classes of generators and classes of storage systems, if required (it might be the case, quite rare, that you need no storage at all).
At this point you can start sizing. Sizing means identifying the main parameters, which depend on the solutions you adopted: the efficiency, the number of cells you end up with, masses and areas in some cases. As you know, all the systems undergo aging in space because of the environment; it is up to you to identify the worst situation, so that you size according to the most demanding one. Two conditions, plus one, that you have to consider are beginning of life and end of life. In some cases this is related to the Sun distance: if I am far from the Sun at end of life, it makes sense to size the panels according to that; but it may happen that in that situation the power demand is the lowest. Imagine instead that the maximum power demand occurs when you are leaving the Earth, at beginning of life, near the Sun, while at the other end you have the lowest power demand, aged elements on board and the largest Sun distance: it is not always so trivial to identify the most critical situation. Keep in mind the aging, where you are, and all the elements in between.
Of course, this is strongly related to your mission analysis: it matters when there are phases in which you want to use the storage and not the generator. The first immediate condition is of course eclipses; this is the first input to get from the mission analysis: the eclipse duration with respect to the availability of the known source, the Sun, together with the power profile in eclipse. And once more, have in mind that nothing is set in stone: if it turns out that the eclipse is so long that you need a high battery capacity because of the power required in eclipse, then you go to whoever is setting up the eclipse operations and ask to move some operations to sunlight. Keep in mind that a very high battery capacity reflects on the solar panels, RTG, etc., which shall recharge the storage system: these aspects are correlated. Don’t assume there is only one way: you can also loop and ask the source of that constraint to change something to make your design feasible. This is something that classically happens in eclipse, apart from the case in which you have a payload that works better in shadow: then you concentrate some activities in eclipses, size your batteries according to those plus the TMTC requirements, and leave the rest to sunlight. And this is not the only point to have in mind.
Tell me if you have in mind other cases in which you might need an energy storage that could become the sizing scenario for an on-board battery. When you arrive with your satellite built, you can physically measure the mass of the system with precision. This is not true for the power: you can of course measure your capability in terms of power and energy, but you cannot precisely know the actual situation in space in terms of energy sources. Having the batteries somewhat oversized is good to cover peaks of power demand, instead of oversizing the solar arrays. Another classical example is the launcher release: when the spacecraft is released, the solar arrays may still be closed and folded, so there is no power source in that time window and you need a battery to open the solar panels and run all the other systems required in that phase. Have in mind that this may still not be the only sizing condition. For example, in a fly-by or even a hovering mission around an asteroid, the eclipse caused by the asteroid on the spacecraft could be almost zero because of the size of the asteroid; the answer seems to be to have no battery on board, but that does not make sense. An example could be a fly-by in which you have a few hours to focus your science, nadir-pointing this stone. So which subsystem is driving the design of this phase? The phase I am describing is flying around an asteroid, with quite a high expected relative velocity, a quite small asteroid and so small eclipses; I have this object and the must, according to the resolution, is to point the object. So you see, in this case I do not want to care about the pointing of my solar panels: it will be the sizing phase for the battery capacity, with no eclipse at all. Then, at the very end, once you know if and how you sized your storage system and how you generate the power, you put in place your architecture, which still depends on the on/off history of the loads and on the power availability; keep in mind that the power subsystem itself contains active equipment, which has its own power demand.
This is a real power budget for a Mars mission: all the subsystems, not at component level but at least at equipment level. At the very end you have a budget that depends on the phase. The cells are filled by the rest of the teams, so that you have the overall situation per mode, and you can select the worst one and size accordingly. At the end you redo the same check and, hopefully, you are compliant; if it is not the case, it means tons of other changes in terms of operations, modes or even components. Some examples: the processor is on in every nominal phase, while there is a difference between the X band and the K band, science or telemetry, which are on and off in different phases. Another line of power that is always on is the one for the tank heaters, because you need to keep the propellant in the tanks liquid.

Electrical power sources

Primary power sources:

• Primary batteries
• Solar array
• RTGs (radioisotope thermoelectric generators)
• Fuel cells
• Solar dynamics
• Nuclear reactor

Power storage and secondary power sources:

• Secondary batteries (accumulators)


• Regenerative fuel cells

We are going to talk about the first three because they are the most widely used. Primary batteries: you cannot recharge them; they are typically quite effective in terms of capacity, but they are one shot (we will see the limitations). Photovoltaic generation, and thermoelectric generation with the RTGs. For the storage, rechargeable batteries. Here you see the main drivers for the selection.
Primary power sources selection drivers:

• Power level and Power density


• Mission lifetime
• Source availability
• Sensitivity to the Sun distance
Secondary source needs and requirements

• Rad hardness (tolerance) (def: electronic circuits tolerance to radiation)


• Degradation
• Reliability/security requirements
• Cost
First, the efficiency: it is in terms of power for the solar cells, of conversion for the RTGs, and of energy for the batteries. As I mentioned before, some technologies are excluded simply because of the lifetime. Remember the 1.3 kW per square meter, proportional to the inverse of the square of the Sun distance: it drops very quickly. The limit used to be Mars; now the so-called low-intensity low-temperature (LILT) solar cells allow getting even beyond Jupiter with solar cells as well. The rad hardness is still a point: solar cells are semiconductors, so they are critically affected by the radiation you encounter; this is one of the reasons why solar cells degrade, with different degradation depending on the flux. The nuclear aspect matters too, for RTGs for example: you may have been taught that European launchers do not accept nuclear elements on board. If you need to use any of those, you shall change your launcher, and so you might change the mission analysis, the configuration and also the cost.
Let’s enter a bit into the technology. This chart compares the power you might get from the sources with their applicability in terms of mission lifetime.
For the primary batteries you have to distinguish between the time they sit on board switched off and the time you use them as source of power, because these are the two variables that lead you to the selection of the technology. One class of primary batteries must be used quickly once activated, within hours or days, otherwise they deplete and there is no way to recharge. Other kinds of primary batteries can stay charged and dormant on board for years and then be used for many hours or days: those that support the commissioning phase (a few minutes or hours to initiate the activities, and that’s it), or those used at arrival, like on landers.
Here are some numbers for sizing, and qualitative aspects of the sources we have just mentioned; I don’t want to enter into them too much. Let’s talk about the last two. Sensitivity to spacecraft shadowing: whenever you have solar arrays and wings, you shall pay attention to how you mount them on the spacecraft. It seems a trivial consideration, but it is still part of the design: it is difficult to have them directly attached to the body; you typically have brackets that create a lever arm between the spacecraft and the wings. This is done for two reasons: the first is to facilitate unfolding, the second to avoid self-shadowing. This is valid for solar panels, but not for the RTG, which has other configuration complexities. The second aspect is the IR signature: for a military satellite it could be of interest not to be thermally detected, and the same applies to the power sources. A thermal generator is conspicuous, so it can be seen in IR; for solar arrays it is a little bit better.

Let’s start with our primary batteries. Classification:

Two classes: silver-zinc and lithium batteries. You charge them, you put them on board, and whenever you close the circuit you use them in one shot, and that’s it. The figure to look at is the specific energy, because batteries are among the most massive components. It is typically good practice to place them near the center of mass, or in different clusters according to the mass distribution. They are usually a hot spot in the satellite, so placing them in the configuration means keeping an eye on the overall thermal state of the satellite. Another important point is the temperature range in which they work. “Long lifetime” does not mean you can keep such a battery on for years: it means you can store it on board for years and then switch it on when you arrive wherever you arrive. Another important aspect is to verify whether they are completely solid or contain some gases, because otherwise they shall be vented. Two phases are relevant here: certainly the launch, but also the passivation at disposal. Passivation means that no stored energy shall remain on board; otherwise you might have spurious actuations from residual propellant, or spurious internal disturbances from battery outgassing. You have to be sure that these two phenomena are controlled and closed as soon as you start the end-of-life of the satellite. In the last column you see the applications: silver-zinc, classically used immediately, on launchers; lithium batteries, used instead on long interplanetary trips.
Examples

→ Rosetta mission – lander Philae

Primary batteries, lithium-thionyl chloride. What you can play with are the series and parallel arrangements of the cells: this is basically a voltage generator, and depending on how you connect the cells you increase the voltage (in series) or the capacity (in parallel).

→ Huygens
→ Hayabusa-2 – MASCOT

Note: the battery market is typically US; Europe has to gain in competitiveness.


If you are dealing with landers, all the components you have on board shall also withstand the structural loads at impact. This is one of the parameters to focus on when you select your components, primary batteries included (mechanically highly robust: shock 90 g for 40 ms).

Let’s skip some slides and jump to sizing: the battery, the RTG and the solar panels.
Secondary batteries sizing
Batteries shall give you an energy (power times time). This is related on one side to the power budget breakdown, in terms of what is on during the eclipse (and have in mind not only eclipses), and on the other to the duration of the eclipse itself. For the eclipse calculation it is possible to use a simplified model, such as the circular orbit, or finer ones using the Keplerian parameters.
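A minimal sketch of the simplified circular-orbit model (cylindrical Earth shadow with the Sun assumed in the orbit plane, i.e. the worst case; the altitude is an assumed example, not from the notes):

```python
import math

# Worst-case eclipse duration for a circular orbit: the spacecraft is in
# shadow over the arc 2*beta, with sin(beta) = R_E / r
mu_E = 3.986e14  # m^3/s^2, Earth gravitational parameter
R_E = 6371e3     # m, Earth radius
h = 500e3        # m, assumed LEO altitude
r = R_E + h

T = 2 * math.pi * math.sqrt(r**3 / mu_E)  # orbital period, ~94.5 min
beta = math.asin(R_E / r)                 # shadow half-angle [rad]
t_ecl = T * (2 * beta) / (2 * math.pi)    # eclipse duration, ~36 min
```

For this 500 km example the eclipse lasts roughly 36 minutes out of a ~95 minute orbit, the classic LEO figure.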
The battery is sized computing its capacity as a function of the required power in eclipse and its characteristic DOD. The capacity of a battery is computed (in the standard form consistent with the variables below) as:

C = (Pe · te) / (DOD · N · η) [Wh], with battery mass M_batt = C / Ed

Where:

• Pe: average eclipse load in Watt

• te: corresponding maximum eclipse time in hours
• DOD: limit on the battery’s Depth-Of-Discharge
• N: number of batteries (at the very initial sizing you can use just one)
• η: transmission efficiency between batteries and load = f(T, C-rate) (depends on the kind of batteries)
• Ed: energy density per unit mass (used to derive the battery mass from the capacity)

DOD: how far you can deplete a battery. Even if a battery has a given energy or a given voltage, you shall not deplete it completely: you use the energy stored only up to a given percentage of the whole capacity.
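A hedged numerical example of the capacity sizing (all figures below are assumed for illustration, not mission data):

```python
# C = (P_e * t_e) / (DOD * N * eta)  [Wh];  battery mass M = C / E_d
P_e = 400.0        # W, assumed average eclipse load
t_e = 36.0 / 60.0  # h, eclipse duration (e.g. a ~36 min LEO eclipse)
DOD = 0.2          # low depth of discharge for a many-cycle LEO mission
N = 1              # single battery for a first sizing
eta = 0.9          # assumed battery-to-load transmission efficiency
E_d = 100.0        # Wh/kg, assumed energy density (Li-ion class)

C = (P_e * t_e) / (DOD * N * eta)  # required capacity [Wh]
M = C / E_d                        # battery mass estimate [kg]
```

Note how the low DOD inflates the installed capacity (and mass) well beyond the energy actually drawn in one eclipse.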
Typical charge-discharge voltage profile:
• Charge at constant current
• Discharge at constant voltage

The allowed DOD depends on the number of cycles. In the chart it is the number of eclipses but, in
general, it is the number of cycles in which the batteries are the primary source (when only the
batteries feed all the other components). This gives the percentage of the battery capacity you can
actually use, because of the hysteresis in the charge-discharge cycle. You might also have
encountered battery reconditioning: some time is dedicated to depleting the battery completely, to
remove its chemical memory, so that it can start fresh for the new situations. In some missions a lot
of operations time is dedicated to battery reconditioning. The longer the mission lifetime and the
higher the number of eclipses, the lower the usable fraction of the capacity. For example, a GEO
satellite, which has eclipse seasons around the equinoxes with eclipses of roughly 30 minutes and
a comparatively small number of cycles per year, might afford a very high depth of discharge. A LEO
satellite with 15 orbits per day has 15 cycles per day, and over several years the count rapidly jumps
beyond 10^4 cycles. In that case batteries shall have a smaller DOD, around 20-30%. That means,
of course, that you are increasing the size and the mass of the batteries.
This sizing of the batteries is then an input that is not disconnected from the solar panels or the
RTG: the primary sources (RTGs or solar panels) oversee that this energy is given back to the
batteries. When you size the primary sources you must account for all your loads plus the battery
recharge. Pay attention to the fact that the DOD to be used depends on temperature: the higher the
temperature, the worse the situation. Cross-check whether the battery is thermally controlled if it is
used in a hot thermal environment; in that case the DOD may differ from the graph above. The same
holds for the efficiency of the batteries: the discharge rate of the battery shall be enlarged because
of the temperature.
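As a minimal sketch, the capacity relation described above can be wrapped in a few lines of Python; the numbers below are an illustrative GEO-like case, not taken from any specific mission:

```python
def battery_capacity_wh(p_eclipse_w, t_eclipse_h, dod, n_batt=1, eta=0.9):
    """Required capacity per battery [Wh]: C = Pe*te / (DOD * N * eta)."""
    return p_eclipse_w * t_eclipse_h / (dod * n_batt * eta)

# Illustrative case: 1.5 kW eclipse loads, 1.2 h worst eclipse, DOD 80%
C = battery_capacity_wh(1500.0, 1.2, dod=0.80, n_batt=1, eta=0.9)
m = C / 140.0  # battery mass [kg], assuming Ed = 140 Wh/kg (Li-ion-like)
```

With these assumed inputs the capacity comes out at 2500 Wh and the mass at roughly 17.9 kg.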

Solar cells
The point for the solar arrays is to supply, during the time in which you have the light source,
whatever is needed all over the orbit. So in this case: eclipse plus daylight, or daylight plus the case
I mentioned before (for example a fly-by), in general any situation where you also need to recharge
the batteries. A simple formulation gives what your solar arrays must produce (Watts).
First Step: identify worst case

• Maximum time in eclipse 𝑇𝑒 and correspondent time in daylight 𝑇𝑑


• Maximum total power requirements in eclipse 𝑃𝑒 and in daylight 𝑃𝑑
• Compute the total power required 𝑃𝑠𝑎 considering an efficiency factor in eclipse 𝑋𝑒 and in
daylight 𝑋𝑑 (both less than one): to this end a Power Budget table for the different modes must
be filled:

𝑃𝑠𝑎 = (𝑃𝑒 𝑇𝑒 / 𝑋𝑒 + 𝑃𝑑 𝑇𝑑 / 𝑋𝑑) / 𝑇𝑑

𝑃𝑠𝑎 depends on the phase and on the mode. The timeframe in which you can get this power is only
the exposure to light, so it is normalized to the lit part of the orbit. This is then an input to size the
panels, either wings or body-mounted. The denominator X is an efficiency, lower than one: I am
asking for more than what is actually needed, because from the practical hardware point of view
there is a loss in the energy conversion. When you have solar panels (and this is even true for RTGs),
imagine that I am generating 10 A but I have a load that asks for the same voltage and just 5 A. I
cannot overload the load, so I have to divert the excess current so that only 5 A reach the load.
Either I tilt the solar panels, reducing the power that is generated, or I introduce in parallel between
the load and the generator a resistor with a switch, so that if needed I dissipate part of the current
on this resistor. Or I put in a converter, forcing the system to convert the voltage to the needed point
so that the current and the voltage required by the load are correct. X is a coefficient that tells me
which kind of electronics or components I put in between when the production differs from what is
requested from the panels.
Something related to the battery: when I am in light the battery is still a load. If I am charging the
battery and this is not at a constant voltage, it might be a problem. I could decouple it and put in
electronics that control the charge of the battery; this could be a first solution: I decouple the battery
during charging, and when it plays as a generator there is no control, the system will follow the
voltage of the battery. Or, even more, a discharge regulator: I keep the loads in eclipse independent
from the fluctuations of the battery. The coefficient X reflects what I select in terms of charging the
battery: whether it is controlled or not. If I have electronics in between, they ask for power and for
power conversion, and the efficiency will no longer be one. These are the reasons why I am
increasing and margining the energy required for the eclipse and for the daylight. Remember that
these coefficients are related to the phase, mode, and epoch. It is important to consider the
degradation of the components. Remember that 𝑃𝑠𝑎 is what is needed in a specific phase/mode.
Second Step: identify power source characteristics

• For solar arrays compute the power generated at Beginning of Life (BOL):

𝑃𝐵𝑂𝐿 = 𝑃0 ∙ 𝐼𝑑 ∙ cos𝜃

• 𝐼𝑑 is the inherent degradation factor (0.49-0.88) and 𝑃0 [W/m²] is the specific power at 1 AU
for the selected solar cells (the radiance of the Sun depends on the distance and on the phase)
• Estimate the solar array degradation factor 𝐿𝑑 = (1 − 𝑑)^(years)
• Compute the power produced at End of Life (EOL): 𝑃𝐸𝑂𝐿 = 𝑃𝐵𝑂𝐿 ∙ 𝐿𝑑
• Compute the total area required: 𝐴𝑆𝐴 = 𝑃𝑠𝑎 / 𝑃𝐸𝑂𝐿
• Compute the correspondent mass from the array areal density

The cell is mounted on a panel: the efficiency in the database refers to the bare cell. Then you must
include the adhesive and cover that protect against outgassing, ionization and thermal loads, and
you have to fix the cells on the structural panels. All this entails a degradation in efficiency, typically
accounted for in the term 𝐿𝑑, taking the cell efficiency into account as well. Silicon cells are around
18% efficient, gallium-arsenide triple junctions around 30%.
Then we must consider the orientation. SAA: solar aspect angle, the angle between the array normal
and the Sun direction. If you have zero degrees you have maximum power. This is related to the
attitude control: the array is either fixed to the satellite or it might have a degree of freedom.
Beginning of life considers where I am, which technology I use, and the inclination.
d is a coefficient: the percentage decay of efficiency per year in a given radiation flux, typically
2-3% per year, compounded as (1 − 0.03)^10 for example. This coefficient weights the power per
unit area available at the beginning of life against what remains later in the mission. Now I have
what is needed in the epoch related to the phase, and what I can have at that epoch. This is of
course for the worst-case sizing: is the degradation more effective than the distance, or is the overall
power demand more effective, with respect to where I am and at which epoch of the mission?
Generally speaking, for an Earth satellite the worst case is the end of life (because the distance is
always 1 AU). Once more, this includes the power for the batteries.
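The whole BOL-to-EOL chain can be sketched as below; the input values (cell efficiency, Id, Sun aspect angle, degradation rate) are assumptions for illustration only:

```python
import math

def solar_array_area_m2(p_sa_w, p0_w_m2, inherent_deg, theta_deg,
                        deg_per_year, years):
    """Array area so that the required power is still met at end of life."""
    p_bol = p0_w_m2 * inherent_deg * math.cos(math.radians(theta_deg))  # W/m^2
    l_life = (1.0 - deg_per_year) ** years   # lifetime degradation factor
    p_eol = p_bol * l_life                   # W/m^2 available at end of life
    return p_sa_w / p_eol

# Illustrative: triple-junction cells, 30% at 1 AU -> P0 ~ 409.8 W/m^2,
# Id = 0.77, 10 deg Sun aspect angle, 3%/year degradation over 10 years
A = solar_array_area_m2(2000.0, 409.8, 0.77, 10.0, 0.03, 10)
```

For these assumed inputs the required area comes out between 8.5 and 9 m².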

RTG radioisotope thermoelectric generator


If you have an RTG the design is similar; for the batteries you shall still include the recharge energy.
I show this for the RTG because it helps you identify whether the size makes sense. Differently from
solar panels, the RTGs available at the time of your mission were basically bought as-is: a single
unit with a given mass and a given thermal-to-electrical efficiency. With solar panels you select the
cells, the number of cells, the strings in terms of parallel or series; for an RTG you select only the
"object".
The future direction is modularity, stacking slices of nuclear decaying elements; this is where we
are nowadays.

Thermoelectric generators include two phases:


1. Thermal energy generation through decay
2. Thermoelectric conversion (either static or dynamic)
For RTGs power degradation is an exponential function of time due to the decay of the radioactive
material (e.g. plutonium):

𝑃(𝑡) = 𝑃0 ∙ (1/2)^(𝑡/𝜏)

• 𝑃0 = beginning of life power
• 𝜏 = half-life period of the radioactive material
Example: plutonium-238 has a power density of 0.41 W/g (as oxide) and a half-life of about
87.7 years.
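The decay law is a one-liner (this assumes pure radioactive decay; real RTGs also lose thermocouple efficiency, so the actual degradation is somewhat faster):

```python
def rtg_power_w(p0_w, years, half_life_y=87.7):
    """Power after `years` of decay: P(t) = P0 * (1/2)**(t/tau)."""
    return p0_w * 0.5 ** (years / half_life_y)

# Illustrative: 300 W electric at launch, Pu-238 heat source
p_20y = rtg_power_w(300.0, 20.0)
```

After 20 years the model leaves roughly 85% of the initial power.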

17/05

EPS EXERCISE

Sizing of Galileo solar arrays and batteries.


Before starting the sizing, it is important to understand what the elements of the subsystem are.
First of all the power generator, which can be a solar array, a battery or an RTG.
Then it is important to think about power distribution, power control and power regulation.
Power distribution: basically, it is all the internal hardware needed to distribute the power from the
source (solar array / battery) to all the components and instruments of the satellite. Most of it is
cabling. You can estimate the mass of the cables statistically or, when you have the length and type
of each cable, compute it exactly. Probably it won't be your case because you won't have the cable
lengths, so do it statistically, through graphs like this or as 15-25% of the total EPS mass.
Then, based on the power distribution, you also need to select the working voltage, because if you
work at a certain voltage level some cables can be used and others cannot.
➔ Usually when the power demand is not higher than 2 kW, the usual bus voltage is 28 V or
also 50 V (we'll see it in Galileo: power demand is less than 2 kW and the bus voltage is 50 V)
➔ When you have a higher power demand (> 2 kW) you usually have a higher bus voltage
(100 V to 150 V)
Power regulation
We can have a subsystem that is fully regulated, quasi-regulated or completely unregulated. The
choice depends on the mission, the power demand, and so on. Of course, the best situation is when
the voltage excursion is low, i.e. when the power system is fully or quasi-regulated. But adding this
kind of regulation increases the mass and the complexity of the subsystem. Also, as we'll see when
talking about power control, the efficiency of the subsystem is reduced, because you need to
dissipate some power, or lose some voltage or current, in order to keep it regulated.
Power control:

You have two different kinds of power control:


➔ Direct Energy Transfer (DET)
➔ Peak Power Tracking (PPT)
They are differentiated because one works with voltage, the other with current: the DET basically
keeps the voltage constant, while the PPT keeps the current constant.
The DET is usually used for longer missions because it has a better efficiency. Efficiency of the
Direct Energy Transfer power control: in daylight Xd = 0.85; in eclipse Xe = 0.65.

PPT: Xd = 0.80; Xe = 0.60.


Of course, this is a design choice and mostly depends on the loads you have on the spacecraft and
the voltage and current you need for your mission.
DET
You have the solar array that produces power, with a given voltage and current. The DET works by
putting a shunt in parallel: if LOAD is the set of instruments and components you need to power,
putting a shunt element in parallel with the solar array keeps your voltage constant. You are forcing
the solar array to lose some power because you keep the voltage constant.

PPT
PPT works the opposite way. You always have the solar array, but you put the PPT in series, so you
don't care about the voltage of the solar array, you just care about the current.

We can see it quite easily on the I-V curve: the continuous line is the current-voltage curve of the
solar array, and the dashed line is the curve of the power requested by the load, as a function of
voltage and current. Suppose you need to power your instruments at a voltage Vx and a current Ix.
What does it mean to use one type of controller instead of the other? If you use a DET, i.e. something
in parallel with the solar array, you fix the voltage: on the solar array curve, at the voltage Vx the
array can deliver a current Isa, but you only need Ix, so with this kind of control you dissipate the
difference in current. Exactly the opposite happens with PPT: you fix the current, so on the solar
array curve you sit at a voltage Vsa, and you dissipate the difference between Vsa and the Vx you
need.
This is the main difference between the two systems.

SOLAR ARRAY SIZING

Data you need:


Spacecraft time in daylight and in eclipse; power control efficiency in daylight and in eclipse (which
depends on the power control system you use, PPT or DET); power demand in daylight and in
eclipse. Then, knowing the position of the spacecraft in the solar system, you can retrieve the
irradiance of the Sun: if we orbit around the Earth we have a certain value of irradiance; around
Venus or Mars the same solar arrays give a different power.
Then we need the mission duration, to size the solar array with respect to the efficiency and
degradation of the cells.
DATA of Galileo

Paverage = 1.75 kW;

V = 50 V (required voltage)
Galileo is quite a long mission, sized for a duration of 12 years. Usually, for longer or higher-power
missions, 100 V or more is used for the distribution. In this case we fall in the first category (power
demand below 2 kW), and a 50 V bus was selected.

Tlife = 12 y;

i= 56° (inclination of the orbit);

spacecraft time in daylight? (there is the reference)


The Galileo orbit is a MEO orbit, quite big and inclined at 56°, so we don't have an eclipse every
orbit; it depends on the period of the year. There are two seasons per year in which the Galileo
spacecraft experiences eclipses. If you check the reference, you'll see that the maximum duration
of an eclipse is 60 minutes. This duration changes during the eclipse season, going from 20 minutes
to 60 minutes. We're taking not the average but the maximum value, which happens just once per
season, so we are greatly overestimating the eclipse duration.

Te, max = 60 min = 3600 s (spacecraft time in eclipse)


Torbital = 14 h 22 min = 51720 s (orbital period)

We have a DET control, so → Xd = 0.85; Xe = 0.65.


Last: we need power request in eclipse and in daylight. Let’s consider them equal and equal to the
average power.

Sun irradiance around the Earth. So, we can consider an average irradiance of:
Io = 1366.1 W/m^2.

We compute the power requested to the solar arrays 𝑃𝑆𝐴:

𝑃𝑆𝐴 = (𝑃𝑒 𝑇𝑒 / 𝑋𝑒 + 𝑃𝑑 𝑇𝑑 / 𝑋𝑑) / 𝑇𝑑 = 2246 W

(the lecturer was unsure of the numerical value)

It’s the max power demand we can request to solar arrays.


We need to select the solar cells. Usual process when you design a mission and EPS subsystem. In
this case we already have the solar cells for the Galileo mission. We have GaAs cells, that have an
efficiency 𝜀𝐵𝑂𝐿 = 30% (at beginning of life) and a degradation 𝑑𝑝𝑦 = 0.03/year. More or less, these
are the most common average values for multi-junction cells. Here on Galileo we have triple-
junction cells.

Now we can compute the power at the beginning of life of the satellite and the specific power.
Specific power: 𝑃0 = 𝜀𝐵𝑂𝐿 ∗ 𝐼0 = 409.8 𝑊/𝑚^2

Depends on efficiency of the cells and the irradiance.


Specific power of the solar arrays at beginning of life: we need the inherent degradation 𝐼𝐷 = 0.77
and the inclination angle, i.e. the angle between the solar array normal and the Sun direction.

How to compute the inclination angle 𝜃?

Assume it (overestimating it): we know the inclination of the orbit (56°). Galileo points towards the
Earth. The Sun is inclined with respect to the equator by the obliquity of the ecliptic, 23.4°. An
overestimation of the inclination angle (quite high) is then:
𝜃 = 56° − 23.4° ≈ 32.6°. Of course, Galileo does not actually work at such an inclination angle, but
for now we overestimate the subsystem.

𝑃𝐵𝑂𝐿 = 𝑃0 ∙ 𝐼𝐷 ∙ cos𝜃 = 226 W/m²


He probably used 𝜃 = 25° for the computation. It is nearly impossible to have an inclination angle
this high: usually 5-10° at most, so even 25° is high. In this way we are overestimating the surface
of the solar panels. For a polar orbit the configuration changes: choose the axis on which you put
the solar panels so as to minimize the angle excursion between pointing towards the Earth and
pointing towards the Sun. If you don't have any information, consider at most the ecliptic obliquity
of 23°, but that is quite big and usually it doesn't happen.

If we find solar panel areas that are larger with respect to the real one, this is one of the reasons.
Lifetime degradation of solar arrays depends on the lifetime duration of the mission and on the 𝑑𝑝𝑦
that is the degradation per year of a solar cell. 𝑇𝑙𝑖𝑓𝑒 in year = 12.

𝐿 𝑙𝑖𝑓𝑒 = (1 − 𝑑𝑝𝑦 )𝑇𝑙𝑖𝑓𝑒 = 0.632 , that is quite low. But we cannot change it.

Compute specific power of solar arrays at the end of the mission:


𝑃𝐸𝑂𝐿 = 𝐿 ∗ 𝑃𝐵𝑂𝐿 = 180.67 W/m²
We have the power we need at the end of life, so we need a solar array surface able to produce this
power at the end of life. Estimate the overall surface of the solar panels (𝑃𝑆𝐴 = power requested):

𝐴𝑆𝐴 = 𝑃𝑆𝐴 / 𝑃𝐸𝑂𝐿 = 12.43 m²

If we look at the real value of the solar wings: each wing is 1 × 5 m → 10 m² in total (5 m² per wing).
The 𝐴𝑆𝐴 found is bigger than the real one: we overestimated the solar surface.
Reasons:

- inclination angle overestimated


- Pe = Pd: we assumed the power demand in daylight and in eclipse is exactly the same. He
thinks they are quite similar here, but it may not be so for your mission.
- Eclipse time: we considered the maximum eclipse of 60 minutes, but the eclipse duration is
roughly sinusoidal over the season with a maximum of 60 minutes. We could have sized with
an average eclipse duration instead of the maximum one.
Compute 𝑚𝑆𝐴 = 𝜌𝑆𝐴 ∙ 𝐴𝑆𝐴 if you have the areal density 𝜌𝑆𝐴 [kg/m²].

We can refine this sizing. We cannot shape the cells in the way we want. We have to select some
cells with their size and their voltage. So, if we want to see the actual voltage and power produced:

Consider Galileo cells have a 𝑉𝑐𝑒𝑙𝑙 = 2.6𝑉 voltage, 𝐴𝑐𝑒𝑙𝑙 = 0.0035𝑚^2.


𝑁 = ceil(𝐴𝑆𝐴 / 𝐴𝑐𝑒𝑙𝑙) = 3552 → the minimum number of cells we need

We need a bus voltage of 50 V and a single cell has a voltage of 2.6 V; the number of cells to put in
series to reach at least the correct voltage is:

𝑁𝑠𝑒𝑟𝑖𝑒𝑠 = ceil(𝑉𝑠𝑦𝑠 / 𝑉𝑐𝑒𝑙𝑙) = 20
The real voltage we have: 𝑉𝑟𝑒𝑎𝑙 = 𝑁𝑠𝑒𝑟𝑖𝑒𝑠 ∗ 𝑉𝑐𝑒𝑙𝑙 = 52𝑉.
Not exactly the voltage we need. It means instruments on the spacecraft must be able to work at
this voltage or we need something that damps down the voltage, a control that takes this 2V.

Real number of cells needed: we must always add complete strings of 20 cells (same voltage):

𝑁𝑟𝑒𝑎𝑙 = ceil(𝑁 / 𝑁𝑠𝑒𝑟𝑖𝑒𝑠) ∗ 𝑁𝑠𝑒𝑟𝑖𝑒𝑠 = 3560

𝐴𝑆𝐴,𝑟𝑒𝑎𝑙 = 𝑁𝑟𝑒𝑎𝑙 ∗ 𝐴𝑐𝑒𝑙𝑙 real surface of solar arrays
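The string layout above reproduces exactly in a few lines, using the area and cell data from the exercise:

```python
import math

A_sa = 12.43     # m^2, required array area from the sizing above
A_cell = 0.0035  # m^2 per cell
V_cell = 2.6     # V per cell
V_bus = 50.0     # V, required bus voltage

N_min = math.ceil(A_sa / A_cell)                 # minimum number of cells
N_series = math.ceil(V_bus / V_cell)             # cells per string
N_real = math.ceil(N_min / N_series) * N_series  # whole strings only
V_real = N_series * V_cell                       # actual string voltage [V]
A_real = N_real * A_cell                         # actual array area [m^2]
```

This gives 3552 cells minimum, strings of 20 cells at 52 V, hence 3560 cells installed.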

SECONDARY BATTERY OF GALILEO

Information we need to size the battery:


- Time window in which we have to use the battery, in this case eclipse time.
- Power required for the battery mode. In this case we use the average power
- Specific energy and volumetric energy density of the battery; they depend on the kind.
- Line efficiency: it depends on the power control. In this case during eclipse, with DET control
we have efficiency Xe = 0.65
Galileo:

Li-ion battery → specific energy 𝐸𝑚 = 140𝑊ℎ/𝑘𝑔

➔ Volumetric energy density 𝐸𝑉 = 260 Wh/dm³


➔ 𝑉𝑐𝑒𝑙𝑙 = 3.6𝑉
➔ 𝐶𝑐𝑒𝑙𝑙 = 7.6 𝐴ℎ (specific capacity of the cell)
DoD: DEPTH OF DISCHARGE
One of the most important parameters to compute the capacity of the battery. It strongly depends
on the number of cycles we need and on the type of battery. Usually statistical methods are used to
compute it: graphs give the DoD as a function of cycle life, i.e. the number of charge-discharge
cycles needed for the mission. The higher the number of cycles, the lower the DoD. For lithium
batteries it usually goes from 40% to 60%.

In this case we have a bit more than 100 eclipses per year; for a 12-year mission we need about
1200 cycles. Not a great number. For this reason (Li-ion battery + low number of cycles) we assume
DoD = 60%.

It is an assumption.

Primary and Secondary batteries characteristics on the slides.


The required battery capacity depends on the eclipse time, the power demand, the DoD, the
efficiency of the battery and the number of batteries.
For the number of batteries, if you don't have any data you may need to iterate: usually for small
satellites it is just one battery, but bigger satellites need more.
In this case: N = 6.

The line efficiency 𝜂 is Xe = 0.65 (DET control in eclipse):

𝐶 = 𝑇𝑅 𝑃𝑅 / (𝐷𝑜𝐷 ∙ 𝑁 ∙ 𝜂) = 747.87 Wh
Thanks to the specific energy and the energy density we compute the mass and the volume of the
battery (important for the structure and the configuration); these are outputs needed by the other
subsystems:

𝑚𝑏𝑎𝑡𝑡 = 𝐶 / 𝐸𝑚 , 𝑉𝑏𝑎𝑡𝑡 = 𝐶 / 𝐸𝑉

𝑁𝑠𝑒𝑟𝑖𝑒𝑠 = ceil(𝑉𝑠𝑦𝑠 / 𝑉𝑐𝑒𝑙𝑙) = 14 : number of cells to put in series to get the correct voltage
(𝑉𝑠𝑦𝑠 = 50 V), since cells work at a given voltage.
𝑉𝑟𝑒𝑎𝑙 = 𝑁𝑠𝑒𝑟𝑖𝑒𝑠 ∗ 𝑉𝑐𝑒𝑙𝑙 = 50.4 V → a very good result! The choice of these batteries is quite good
for a bus voltage of 50 V.
Capacity of a single string (a string is 14 cells in series): 𝐶𝑐𝑒𝑙𝑙 is the capacity of a single cell in
ampere-hours; 𝜇 is an efficiency that depends on the battery and on the number of cells put
together, usually around 0.8 (assumed 0.8 here).

𝐶𝑠𝑡𝑟𝑖𝑛𝑔 = 𝜇 ∙ 𝐶𝑐𝑒𝑙𝑙 ∙ 𝑉𝑟𝑒𝑎𝑙 = 306.4 Wh

At this point we have the total capacity (𝐶) we need from the battery and a single cell capacity. In
order to obtain at least this value, we need to put different strings of cell battery in parallel.

𝑁𝑝𝑎𝑟𝑎𝑙𝑙𝑒𝑙 = ceil(𝐶 / 𝐶𝑠𝑡𝑟𝑖𝑛𝑔) = 3
Real capacity we have with 3 strings:

𝐶𝑟𝑒𝑎𝑙 = 𝑁𝑝𝑎𝑟𝑎𝑙𝑙𝑒𝑙 ∗ 𝐶𝑠𝑡𝑟𝑖𝑛𝑔 = 919.2𝑊ℎ ≫ 𝐶

We have quite a lot more capacity than we need: we are wasting almost 200 Wh. What can we do?
We can iterate to reduce the real capacity of the battery by changing the type of battery (and hence
the energy density, so the mass and the volume, and the voltage of the cells), in order to have a real
capacity as close as possible to the capacity we need.
Margin: you could margin the power demand in the preliminary sizing.
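The whole battery chain above is easy to check numerically (𝜇 = 0.8 is the assumed string efficiency):

```python
import math

# Galileo battery inputs from the exercise above
Pe, Te_h = 1750.0, 1.0            # W, h (60-minute eclipse)
DoD, N_batt, eta = 0.60, 6, 0.65  # eta = Xe for DET control in eclipse
Em, Ev = 140.0, 260.0             # Wh/kg, Wh/dm^3
V_cell, C_cell, V_bus = 3.6, 7.6, 50.0
mu = 0.8                          # assumed string efficiency

C = Pe * Te_h / (DoD * N_batt * eta)  # Wh, required capacity per battery
m_batt = C / Em                       # kg
v_batt = C / Ev                       # dm^3
N_series = math.ceil(V_bus / V_cell)  # cells per string
V_real = N_series * V_cell            # V, actual string voltage
C_string = mu * C_cell * V_real       # Wh per string
N_parallel = math.ceil(C / C_string)  # strings in parallel
C_real = N_parallel * C_string        # Wh actually installed
```

The script reproduces the numbers in the exercise: C ≈ 747.9 Wh, 14 cells per string at 50.4 V, 3 strings, C_real ≈ 919.3 Wh.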

RTG

Sizing of the RTG is quite straightforward: they come standardized, with fixed mass, power and efficiency.

➔ GE-RTG (General Electric RTG): m = 56 kg, thermal P = 4400 W, η ≈ 0.068

➔ MM-RTG (Multi-Mission RTG): m = 43 kg, thermal P = 2000 W, η ≈ 0.063
So, with respect to the power we need for our mission, we choose whether to put one or more RTGs.
Information needed for the sizing: power demand, lifetime duration, mass of the RTG, nominal
(thermal) power of the RTG module, conversion efficiency (the last three depend on the kind of
RTG), half-life decay period (depends on the radioactive source; usually plutonium-238 → half-life
≈ 87.7 years).
Multiply the nominal power of the RTG module by its conversion efficiency to obtain the electrical
power at beginning of life. Compute the power at the end of life, which depends on the half-life
decay and the duration of the mission. Then work out how many devices are needed to satisfy the
power request at the end of life.

At the end find the overall mass of RTGs
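As a sketch of this selection loop, assuming MM-RTG-like module data from above (the 400 W requirement is invented for illustration):

```python
import math

def rtg_count(p_required_w, p_thermal_w, eta, years, half_life_y=87.7):
    """Modules needed so that the EOL electrical power meets the demand."""
    p_eol = p_thermal_w * eta * 0.5 ** (years / half_life_y)  # W per module
    return math.ceil(p_required_w / p_eol)

# Illustrative: 400 W electric after 12 years with MM-RTG-like modules
n_modules = rtg_count(400.0, 2000.0, 0.063, 12.0)
m_total = n_modules * 43.0  # kg, overall RTG mass
```

Each module delivers about 115 W electric after 12 years, so four modules are needed here.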

ADCS EXERCISE Design process


ADCS starts from the mission requirements and some data of the spacecraft (inertias, dimensions,
and so on: you need some inputs to size). You then quantify the disturbances with respect to the
environment and, knowing the disturbances and the maneuvers / RW desaturations you need to do,
you are able to size and select all the ADCS hardware: actuators for the control and sensors for the
determination. Then, in a very fine design process (not requested from you for reverse engineering),
once you have all the actuators and sensors you also need to define the algorithms used to retrieve
information from the sensors (ADS algorithms) and to generate the inputs for the actuators (ACS).
You need some inputs. When you preliminarily size all the subsystems, at least you have to define
the main axes of the satellite (pitch, roll and yaw) and their direction with respect to the satellite
body.

You need to know the mass to compute maneuvers (delta v), dimension, …
There are some statistical formulas to preliminarily estimate the characteristic length, surface and
moments of inertia of the satellite. Totally statistical formulas: if you don't have any other data,
start from them.

DISTURBANCES

4 kinds: GG, SRP, aerodynamic drag, magnetic field


Remember that each disturbance can be constant or cyclic, depending on the pointing you want to
do.
If the s/c is constantly pointing at the Earth, the gravity gradient is constant, because it is a
disturbance acting along the s/c-Earth direction. If the pointing is inertial, the disturbance is cyclic
over the orbit of the satellite.
The same for SRP: if one axis points at the Sun (e.g. for the solar arrays), the SRP is constant; if
Earth-pointing, the SRP is cyclic.

Aerodynamic drag: constant for Earth pointing, variable for inertial pointing.

The magnetic field torque is always cyclic.


Consider these characteristics of the disturbances. Then you have internal disturbances: if you know
some of them could be quite big (maybe you have a very big tank and you expect sloshing), you
can consider them, but preliminarily you can neglect them.
ACTUATOR SIZING: RW
It is important to distinguish the torque you need to counteract disturbances and perturbations
from the torque you need to perform maneuvers. With a RW you can do two things: counteract
external disturbances and perform slew maneuvers.
Formulas. From where?


To counteract an external disturbance 𝑇𝐷 you apply a disturbance margin factor 𝜂 for the reaction
wheel, which increases the torque needed:
𝑇𝑅𝑊 = 𝜂𝑇𝐷
The angular momentum stored in the RW builds up over the lifetime, so you need to desaturate the
RW with other actuators (thrusters or magnetorquers).
Formula for a slew manoeuvre of angle 𝜃 in time 𝑡:

𝑇𝑅𝑊 = 4𝜃𝐼 / 𝑡²

The angular momentum of the satellite is its moment of inertia times its angular velocity.

THRUSTERS

𝑇: torque. 𝐹: thrust needed. 𝐿: arm. 𝑛: number of thrusters.

𝑇 = 𝑛𝐹𝐿

The torque is also equal to the inertia of the spacecraft times its angular acceleration:

𝑇 = 𝐼𝑆𝐶 𝜃̈ → 𝜃̈ = 𝑛𝐹𝐿 / 𝐼𝑆𝐶

Integrating gives the angular velocity:

𝜃̇ = (𝑛𝐹𝐿 / 𝐼𝑆𝐶) 𝑡𝑚

Integrating again gives the overall angular manoeuvre:

𝜃 = ½ (𝑛𝐹𝐿 / 𝐼𝑆𝐶) 𝑡𝑚²

If you fix the manoeuvre time, this is the formula relating the angle and the time.

If you know the data of the thruster, i.e. the specific impulse, you can compute the fuel needed for
the manoeuvre:

𝑚𝑓𝑢𝑒𝑙 = 𝑛𝐹𝑡𝑚 / (𝐼𝑠𝑝 𝑔0)

You can have at least two kinds of slew manoeuvres (profile of acceleration versus time).

You can do a manoeuvre in which you accelerate, then coast (zero acceleration, constant angular
rate) and then brake (decelerate). Usually the acceleration time is equal to the braking time; then
you can decide 𝑡𝑐𝑜𝑎𝑠𝑡. The angle reached by the manoeuvre is the sum of the angles covered in the
acceleration, coasting and braking phases:
𝜃𝑚 = 𝜃𝑎𝑐𝑐 + 𝜃𝑐𝑜𝑎𝑠𝑡 + 𝜃𝑏𝑟𝑎𝑘𝑒

Typically acceleration and braking are the same:

𝜃𝑐𝑜𝑎𝑠𝑡 = 𝜃̇ ∗ 𝑡𝑐𝑜𝑎𝑠𝑡, with 𝜃̇ = (𝑛𝐹𝐿 / 𝐼𝑆𝐶) ∗ 𝑡𝑎𝑐𝑐 (the rate reached at the end of the acceleration
phase)

𝜃𝑎𝑐𝑐 = 𝜃𝑏𝑟𝑎𝑘𝑒 = ½ (𝑛𝐹𝐿 / 𝐼𝑆𝐶) ∗ 𝑡𝑎𝑐𝑐² (integrating the angular velocity)

𝜃𝑚 = (𝑛𝐹𝐿 / 𝐼𝑆𝐶) ∗ 𝑡𝑎𝑐𝑐² + (𝑛𝐹𝐿 / 𝐼𝑆𝐶) ∗ 𝑡𝑎𝑐𝑐 ∗ 𝑡𝑐𝑜𝑎𝑠𝑡 → overall angle of the slew manoeuvre

You could also have a slew manoeuvre with no coasting phase. Just accelerate and decelerate
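Both profiles follow from the same kinematics; a minimal sketch with purely illustrative numbers (no coasting is just t_coast = 0):

```python
def slew_angle_rad(n, F, L, I_sc, t_acc, t_coast=0.0):
    """Slew angle for accelerate(-coast)-brake, with t_brake = t_acc."""
    alpha = n * F * L / I_sc              # angular acceleration [rad/s^2]
    theta_acc = 0.5 * alpha * t_acc ** 2  # braking phase mirrors this angle
    theta_coast = alpha * t_acc * t_coast
    return 2.0 * theta_acc + theta_coast

# Illustrative: 2 thrusters of 0.5 N on a 0.5 m arm, I = 100 kg m^2
theta = slew_angle_rad(2, 0.5, 0.5, 100.0, t_acc=5.0, t_coast=20.0)
```

For these assumed values the manoeuvre spans 0.625 rad.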

You can also do MOMENTUM DUMPING of RW with thruster.

ADCS – sizing
1.1 THRUSTERS
The torque generated by a thruster system is quite simple and depends on the available thrust, on
the number of thrusters and on the arm:
𝑇 = 𝑛𝐹𝐿
Torque is also equal to the product of the moment of inertia and the angular acceleration (2nd time
derivative of the angle spanned by the maneuver):

𝑇 = 𝐼𝑠𝑐 𝜃̈
Solving this equation for the angular acceleration and integrating over the burn time, it is possible
to retrieve the angular velocity and the angle spanned during the thrusting phase:

𝜃̇ = (𝑛𝐹𝐿 / 𝐼𝑠𝑐) 𝑡𝑏   𝜃 = (𝑛𝐹𝐿 / 2𝐼𝑠𝑐) 𝑡𝑏²

where 𝑡𝑏 is the burn time.


With thrusters we can do a lot of maneuvers:

- Counteract disturbances (also done with magnetorquers, reaction wheels, etc.) -> the level of
thrust you need, including margins, is:

𝐹𝑡ℎ = 𝑇𝐷 / (𝑛𝐿)

where n is the number of thrusters and L is the thruster's moment arm.
Usually the force needed for this is very small, so the slew rate is the sizing maneuver;
moreover, using thrusters to counteract cyclic disturbances uses much fuel.

- Slew maneuver (to achieve a desired angle) can be achieved in different maneuvering
sequences:
o Acceleration (constant) + coasting + brake (constant) -> 𝜃𝑚 = 𝜃𝑎𝑐𝑐 + 𝜃𝑐𝑜𝑎𝑠𝑡 + 𝜃𝑏𝑟𝑎𝑘𝑒
Starting from values of 𝑡𝑎𝑐𝑐 and 𝑡𝑐𝑜𝑎𝑠𝑡 it is possible to compute the angles:

𝜃𝑐𝑜𝑎𝑠𝑡 = 𝜃̇ 𝑡𝑐𝑜𝑎𝑠𝑡 = (𝑛𝐹𝐿 / 𝐼𝑠𝑐) 𝑡𝑎𝑐𝑐 𝑡𝑐𝑜𝑎𝑠𝑡

𝜃𝑎𝑐𝑐 = 𝜃𝑏𝑟𝑎𝑘𝑒 = (𝑛𝐹𝐿 / 2𝐼𝑠𝑐) 𝑡𝑎𝑐𝑐²

o Acceleration (constant) + brake (constant) -> 𝜃𝑚 = 𝜃𝑎𝑐𝑐 + 𝜃𝑏𝑟𝑎𝑘𝑒


𝜃𝑎𝑐𝑐 = 𝜃𝑏𝑟𝑎𝑘𝑒 = (𝑛𝐹𝐿 / 2𝐼𝑠𝑐) 𝑡𝑎𝑐𝑐²

The other way around, we can compute 𝑡𝑚 = 2𝑡𝑎𝑐𝑐 = 2√(𝜃𝑚 𝐼𝑠𝑐 / 𝑛𝐹𝐿).

Note: the total slew angle is the sum of the angle spanned in the acceleration phase and the
angle spanned in the braking phase.

At the end, the force needed for a slew maneuver depends on the required slew rate:

𝐹𝑡ℎ = 𝑇𝑠𝑙𝑒𝑤 / (𝑛𝐿) = 𝐼𝑠𝑐 𝜃̈ / (𝑛𝐿), with 𝜃̈ = 𝜃 / (𝑡𝑠𝑙𝑒𝑤 𝑡𝑠𝑙𝑒𝑤,𝑎𝑐𝑐)

Note: we can have different unknowns. If we know the desired slew angle we can impose the
time for the maneuver and find the thrust needed (because all other parameters are known),
or we can find the maximum angle achievable for a certain thrust level.
- Momentum dumping (desaturation of the reaction wheels) -> depends on the stored angular
momentum ℎ𝑅𝑊 and on the burn time (the time in which you want to desaturate the wheel):

𝐹𝑡ℎ = ℎ𝑅𝑊 / (𝑛𝐿𝑡𝑏𝑢𝑟𝑛)

To estimate the propellant mass used by the thruster system we shall estimate the total pulse
number and length, considering maneuvers, disturbance counteraction (if necessary) and
momentum dumping:

𝑡𝑝𝑢𝑙𝑠𝑒𝑠 = 𝑝𝑠𝑙𝑒𝑤 𝑡𝑠𝑙𝑒𝑤 + 𝑝𝑑𝑢𝑚𝑝 𝑡𝑑𝑢𝑚𝑝

Then the propellant mass is:

𝑀𝑝𝑟𝑜𝑝 = 𝐼𝑡𝑜𝑡 / (𝐼𝑠𝑝 𝑔) = (𝑡𝑝,𝑠𝑙𝑒𝑤 𝐹𝑡ℎ,𝑠𝑙𝑒𝑤 + 𝑡𝑝,𝑑𝑢𝑚𝑝 𝐹𝑡ℎ,𝑑𝑢𝑚𝑝) / (𝐼𝑠𝑝 𝑔)

where 𝐼𝑡𝑜𝑡 is the total impulse, computed as the product of the total number of pulses and their
length (it changes depending on the maneuver), and 𝐹𝑡ℎ is the force needed for the maneuver.
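A sketch of this propellant estimate; the Isp, pulse durations and thrust levels are assumptions for illustration:

```python
def adcs_propellant_kg(t_pulse_slew_s, f_slew_n, t_pulse_dump_s, f_dump_n,
                       isp_s=220.0, g0=9.81):
    """M_prop = I_tot / (Isp * g0), with I_tot the summed impulse [N s]."""
    i_tot = t_pulse_slew_s * f_slew_n + t_pulse_dump_s * f_dump_n
    return i_tot / (isp_s * g0)

# Illustrative: 1000 s of 1 N slew firings plus 500 s of 0.5 N dumping
m_prop = adcs_propellant_kg(1000.0, 1.0, 500.0, 0.5)
```

For these assumed inputs the propellant mass is a bit under 0.6 kg.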
One of the last maneuvers you can do with thrusters is re-orienting a spin-stabilized s/c (it keeps
rotating around the axis with the greatest angular momentum) while maintaining the same angular
velocity. This configuration is used when you want the spacecraft to resist disturbances directly (it
is a quite stable configuration, since the spin gives gyroscopic stiffness), but when you want to
reorient the spin axis you need to use thrusters. The thrust required is:

𝐹𝑡ℎ = 𝐼𝑠𝑐,𝑠𝑝𝑖𝑛 𝜔 Δ𝜃 / (𝑛𝐿𝑡𝑡ℎ)

You also need to consider the Δ𝜃 (tilt of the spin axis) you want to achieve, given by the ratio
between the control angular momentum and the actual angular momentum of the spin-stabilized
s/c:

Δ𝜃 = 𝐻𝑐𝑜𝑛𝑡𝑟𝑜𝑙 / 𝐻𝑠𝑝𝑖𝑛,0 = 𝑛𝐹𝑡ℎ 𝐿𝑡𝑡ℎ / (𝐼𝑠𝑐,𝑠𝑝𝑖𝑛 𝜔)

Note: if we fix Δ𝜃 we can find the thrust that we need.
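Fixing Δ𝜃 and solving for the thrust is immediate; the numbers below are invented for illustration:

```python
def spin_reorient_thrust_n(i_spin, omega, dtheta, n, L, t_burn):
    """F = I_spin * omega * dtheta / (n * L * t_burn)."""
    return i_spin * omega * dtheta / (n * L * t_burn)

# Illustrative: 500 kg m^2, 0.5 rad/s spin, 0.1 rad tilt, 2 thrusters, 0.5 m arm
F = spin_reorient_thrust_n(500.0, 0.5, 0.1, 2, 0.5, t_burn=10.0)
```

With these assumed values the required thrust is 2.5 N per burn.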

1.2 REACTION WHEEL


With reaction wheel we can always counteract disturbances, and the torque needed is the product
between a disturbance margin factor (depends on which reaction wheel you are using) and the
disturbance torque:
𝑇𝑅𝑊 = 𝜂𝑇𝐷
Considering a slew maneuver with constant acceleration and braking, the slew angle 𝜃𝑚 is computed
exactly as before (thrusters), but here instead of 𝑛𝐹𝑡ℎ𝐿 (torque generated by the thrusters) we have
the torque generated by the reaction wheel.
As for the thrusters, a single slew maneuver can be split into phases (sketch at 37:24 in the
recording); from the graph one understands that half of the angle is achieved in half of the time:

𝜃𝑚/2 = (𝑇 / 2𝐼𝑠𝑐) (𝑡𝑚/2)² → 𝑇 = 4𝜃𝑚 𝐼𝑠𝑐 / 𝑡𝑚²
To estimate the momentum storage needed to counteract the gravity-gradient disturbance (in low
Earth orbit it is the most important disturbance) we can use an approximated formula, considering
the worst-case disturbance torque over a full orbit:

ℎ𝑅𝑊 = 𝑇𝐷 (𝑡𝑜𝑟𝑏𝑖𝑡 / 4) (0.707)

where the factor 0.707 represents the average value of a sinusoidal disturbance over a quarter of
the orbital period.
1.3 MAGNETIC TORQUERS
The torque generated by magnetic torquers depends on the magnetic dipole produced by the electrical current flowing through the torquer coil.
Magnetic torquers can be used for desaturating (dumping) the wheels, but also for counteracting the external disturbances; in the latter case the dipole needed is:

D = T_D / B

where B is the Earth's magnetic field. In case of momentum dumping, T_D is given by the sum of the peak disturbance and a margin to compensate for the lack of complete directional control.
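The dipole sizing is a one-liner; as an example, using the gravity-gradient torque of the exercise below and an assumed LEO field magnitude of 3×10⁻⁵ T:

```python
def torquer_dipole(T_D, B):
    """Magnetic dipole [A m^2] needed to produce torque T_D [N m]
    in a magnetic field of magnitude B [T]."""
    return T_D / B

# Assumed LEO field ~3e-5 T; disturbance torque from the exercise below
D = torquer_dipole(4.5e-5, 3e-5)  # 1.5 A m^2
```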

EXERCISE – SIZING OF THRUSTERS AND REACTION WHEELS


Consider a s/c with 6 thrusters (2 per axis, so n = 2 active per maneuver) and 3 reaction wheels, with a total angular-momentum storage H_RW = 0.4 N m s and a thruster arm L = 2.5 m.
We need to counteract external disturbances in LEO -> gravity gradient: T_D = 4.5 × 10^-5 N m

We want to perform a slew maneuver of θ_m = 45° in t_m = 1 min 30 s = 90 s with a sequence of accelerating (10% of the time), coasting (80% of the time) and braking (10% of the time).
We also know that the orbital period is T = 120 min and the moment of inertia of the s/c is I_sc = 90 kg m^2.
Moreover, we have 12 slews (𝑛𝑠𝑙𝑒𝑤 ) per year for a lifetime of 10 years (𝑡𝑙𝑖𝑓𝑒 ).
To counteract the disturbances we can use the reaction wheels, and therefore we compute the stored angular momentum:

h_RW = T_D (T/4)(0.707) = 5.7 × 10^-2 N m s
To compute the slew maneuver, we can define:
𝑡𝑎𝑐𝑐𝑒𝑙𝑒𝑟𝑎𝑡𝑖𝑜𝑛 = 9𝑠
𝑡𝑐𝑜𝑎𝑠𝑡𝑖𝑛𝑔 = 72 𝑠

And consequently the slew angle is:

θ_slew = θ_acc + θ_coast + θ_brake = (n F L / I_sc)(t_acc t_coast + t_acc^2)

from which F = 0.7 N (quite small for the shape and moment of inertia of the satellite; to increase it, one possibility is to perform the slew in a shorter time).
The thrust given by the thrusters can also be retrieved in another way; let's assume (it's an approximation) that the angular velocity we need is:

θ̇ = θ_slew / t_slew = 0.5 °/s

Consequently, the angular acceleration (reached during the acceleration phase) is:

θ̈ = θ̇ / t_acc = 9.7 × 10^-4 rad/s^2
Finally the torque that we need is:

𝑇 = 𝐼𝑠𝑐 𝜃̈ = 𝑛𝐹𝐿
From which the thrust is:

F = I_sc θ̈ / (n L) = 0.017 N
The result we obtain is lower than the previous one, but it does not contain margins; in the end we can choose either approach, keeping in mind that in the latter case we must add margins.
For the momentum dumping we need to compute the number of orbits after which the wheels are saturated:

n_orb = H_RW / h_RW = 0.4 / (5.7 × 10^-2) ≃ 70 orbits

Since we don't want to completely saturate the wheels, we take a 20% margin, such that n_orb = 56 orbits.
To understand how many pulses we need per year, we need the time after which the wheels must be desaturated:

Δt_sat = T n_orb = 6720 min = 4.67 days

To be conservative we can assume that the wheel saturates after 4 days.

The total pulses are the product between the number of pulses for each maneuver (2 pulses for
each slew: one for the acceleration phase and one for deceleration), the number of maneuvers per
year and the number of years:
p_tot = p_slew n_slew t_life = 2 * 12 * 10 = 240 pulses

The propellant mass can be computed as:

m_prop = I_tot / (I_sp g0) = (p_slew t_slew F_slew + p_dump t_dump F_dump) / (I_sp g0) = 1.05 kg

where we assumed I_sp = 200 s, t_slew = t_acc + t_braking = 18 s, F_slew = 0.7 N, t_dump = 5 s and F_dump = H_RW / (n L) = 0.08 N.
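The consistent part of the numeric chain of this exercise can be reproduced in a few lines; a sketch assuming n = 2 active thrusters and a 2.5 m arm (the combination that matches F = 0.017 N and F_dump = 0.08 N above — the transcript's other intermediate values do not all close):

```python
import math

# Exercise data
T_D, T_orb = 4.5e-5, 120 * 60          # disturbance torque [N m], orbital period [s]
I_sc, theta_m, t_m = 90.0, math.radians(45.0), 90.0
t_acc = 0.1 * t_m                      # 9 s of acceleration
n, L = 2, 2.5                          # assumed: 2 active thrusters, 2.5 m arm

# Wheel momentum storage over one orbit (sinusoidal worst case)
h_rw = T_D * T_orb / 4 * 0.707         # ~5.7e-2 N m s

# Slew via the average-rate approximation
theta_dot = theta_m / t_m              # 0.5 deg/s
theta_ddot = theta_dot / t_acc         # ~9.7e-4 rad/s^2
F = I_sc * theta_ddot / (n * L)        # ~0.017 N (no margins included)

# Thrust to dump the stored wheel momentum (H_RW = 0.4 N m s)
F_dump = 0.4 / (n * L)                 # 0.08 N
```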

Lecture notes, 23 May

EPS

Secondary batteries
Secondary (rechargeable) batteries support the primary power source when the needed power exceeds the available one, for example in case of eclipse. Battery cell packaging must be accounted for; usually it does not exceed 20% of the battery volume. Nowadays almost all batteries are based on lithium technology, while earlier ones used cadmium and similar chemistries. Compared with traditional batteries, lithium batteries have a high energy density and optimal energy efficiency, but a low thermal dissipation factor.
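A standard first-cut sizing for a secondary battery follows directly from the eclipse energy demand; a sketch with illustrative numbers (load, eclipse duration, depth of discharge and efficiency are all assumptions, not values from the lecture):

```python
def battery_capacity_wh(P_eclipse, t_eclipse_h, dod, eta):
    """Required battery capacity [Wh] to supply P_eclipse [W] for
    t_eclipse_h [h] at a given depth of discharge (dod) and
    discharge efficiency (eta)."""
    return P_eclipse * t_eclipse_h / (dod * eta)

# Illustrative: 600 W load, 0.6 h eclipse, 30% DOD, 90% efficiency (assumed)
C = battery_capacity_wh(600.0, 0.6, 0.30, 0.90)  # ~1333 Wh
```

Limiting the depth of discharge is what drives the oversizing: a lower DOD means a heavier battery but many more charge/discharge cycles over the mission life.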
Solar arrays
Solar arrays are the final product: they produce electrical power (via solar cells), they are structural elements capable of supporting loads, and they are coated with a protective material against external agents. Companies that produce solar cells might not assemble the solar panel, and other companies may only assemble but not produce cells. Each solar cell is connected to the others through cables and electrical harnesses. For this reason, the effective area for power production is not the whole surface of the solar panel.

Solar cells used may be:


- Mono-crystalline silicon cells. Performance and efficiency (~20%) are acceptable and the degradation per year is quite low (2-7%). Typically, if the efficiency is high there is a high degradation per year, while if the efficiency is low, the degradation is low as well.
- Multi-junction cells. Well-proven technology with a low degradation factor (3-7%).
- Thin-film cells. Technology still in development; it grants a high power-over-mass ratio. Used in the ROSA experiment.

Multi-junction
A multi-junction cell is sensitive to different parts of the electromagnetic spectrum, covering altogether from UV to IR wavelengths. For silicon cells, only a part of the spectrum is absorbed. Solar cells are current generators (no computation is asked by Lavagna). Each point of the V-I curve represents an operating point of the cell.


The V-I curve needs to be defined according to the temperature, since it is highly dependent on it: a cold cell is more efficient than a hot one. Another big variable is radiation, which lowers the efficiency.
Connecting cells in series increases the voltage, while connecting them in parallel increases the current.
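The series/parallel rule can be sketched for an array of identical cells (cell voltage and current are illustrative assumptions):

```python
def array_output(v_cell, i_cell, n_series, n_parallel):
    """Voltage and current of an array of identical cells:
    cells in series raise the voltage, strings in parallel raise the current."""
    return v_cell * n_series, i_cell * n_parallel

# Illustrative: 2.4 V / 0.5 A cells, strings of 20, 4 strings in parallel
V, I = array_output(v_cell=2.4, i_cell=0.5, n_series=20, n_parallel=4)
# -> 48 V per string, 2 A total
```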
Diodes
Blocking diodes are placed in series with each string to avoid reverse currents, protecting against string failure or low generation in parallel configurations.
By-pass diodes protect a string when some cells are in penumbra or fail among cells connected in series.


Companies that produce solar cells/arrays are:

• Azurespace
• Spectrolab
• CESI (based in Lambrate, near Milan)

Efficiency variations:
The angle θ is the angle between the Sun direction and the normal of the cell plane. Radiation affects the generation efficiency. Solar intensity scales with the inverse square of the distance from the Sun, while the temperature of the cell dictates the efficiency: hot is less efficient. For that, thermal control must be added.
In LEO, compared with GEO, there are more free protons impacting the solar cells, so the degradation effects are higher.
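The efficiency factors just listed (incidence angle, distance, yearly degradation) combine multiplicatively in a first-cut end-of-life power estimate; a sketch with assumed cell and geometry values:

```python
import math

def array_power(area, eta, theta_deg, years, degr_per_year, q_sun=1367.0):
    """End-of-life array power [W]: solar flux * cell efficiency *
    cos(incidence angle) * cumulative lifetime degradation."""
    return (q_sun * area * eta * math.cos(math.radians(theta_deg))
            * (1 - degr_per_year) ** years)

# Illustrative: 5 m^2, 28% multi-junction cells, 10 deg off-Sun,
# 3.75%/yr degradation over 10 years (all assumed values)
P_eol = array_power(5.0, 0.28, 10.0, 10, 0.0375)
```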

A solar array is composed of: a cover glass, used for thermal control, reflection reduction, and debris & atomic oxygen protection; adhesive to join the layers; the cell itself; and a structural & thermal part, where rigidity is important (that is why a honeycomb configuration with carbon fiber is usually used), with a heatsink for heat transfer.
There are two typical packaging types. Flat packaging is the most used and the most economical, but the wires connecting the cells are exposed. In shingled packaging, wires and connections are covered by the slight overlap angle of the cells, guaranteeing a high packaging factor and lowering the mass, but with higher cost and harder maintenance.

Concentrators
Concentrators are devices that increase the solar flux on the cells. There are two ways:

- Lateral mirrors, reflecting the lateral radiation toward the cells.
- Lenses, concentrating the flux on a smaller area.

The drawback is the temperature increase, requiring higher attention to heat transfer and thermal control.
The NASA Deep Space 1 mission used Fresnel lenses (flexible lenses) to concentrate power.
Flexible solar arrays were tested in the ROSA experiment and on the DART mission. The cells are rolled up, so a minimum bending radius is imposed by their fragility: the material is very brittle.
Radioisotope thermoelectric generator (RTG)
The radioisotope thermoelectric generator (RTG) is another type of energy source. It generates power through the Seebeck effect.
RTG technology is quite versatile, with a power output sized according to the power requested; however, the higher the power, the higher the mass. The efficiency of this technology is quite low (~7%). This brings two criticalities: a high power request leads to a high mass, and the high thermal output must be dissipated. For the last reason, it is usually located outside the s/c body. This type of generator must be capable of containing the radioactive material in case of an explosion of the launcher: radiation must be contained throughout the mission, right from launch.

The selection of the radioactive material is strictly related to the mission life and to the material's decay time. Plutonium is chosen for its long half-life.
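The link between half-life and mission life can be made concrete with the exponential decay law; a sketch using the Pu-238 half-life of about 87.7 years and an assumed MMRTG-class electrical output at launch:

```python
def rtg_power(p_bol, t_years, half_life_years=87.7):
    """Power after t_years of radioactive decay of the heat source:
    P(t) = P_bol * 0.5 ** (t / half_life). Pu-238 half-life ~87.7 yr."""
    return p_bol * 0.5 ** (t_years / half_life_years)

# ~110 W(e) at launch (assumed, MMRTG-class) after a 14-year mission
P = rtg_power(110.0, 14.0)
```

In practice the electrical output decays faster than the heat source alone, because the thermocouples also degrade; the sketch captures only the radioactive-decay term.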
The Multi-Mission RTG (MMRTG) is functionally identical to the RTG but more compact; it has the same efficiency but different mass and power. It is mounted on many rovers, for example Curiosity. The next-generation RTG has higher efficiency and different thermoelectric tiles.
Stirling radioisotope generators (SRG) use the thermal energy to run a Stirling cycle in which mechanical energy is converted into electrical energy. The efficiency increases up to 25%, roughly the same as solar panels. The criticality is that they have a lot of moving parts. They are suggested for operations on surfaces, such as landers or rovers.
The main difference between the technologies described above (MMRTG and SRG) is mainly the efficiency.
Once activated, all the technologies based on radioactive decay cannot be switched off: power is available from the start.
Efficiency also depends on the environment in which the technology is used. For example, in deep space there is no convection, only radiation, while on Mars that kind of heat exchange also exists.

Power conditioning and distribution


There are two ways of power
conditioning:

- Direct Energy Transfer (DET)


- Peak Power Tracking (PPT)
In DET conditioning, a shunt is put in parallel with the source to dissipate the exceeding power. It is a proven technology, well suited for medium-long duration missions (>5 years). The voltage is kept constant, while the control acts on the produced current. The problem arises when there is not enough power generation: depending on the configuration of the cells, if the available power is not enough, the system cannot operate. If there is a battery, the missing power can be provided by it; otherwise, part of the loads must be switched off.
Sunlit-regulated bus: the battery has a charge control, but no discharge control. With a high power demand the battery becomes the power source, and the bus is dictated by the battery voltage until discharge. As soon as an eclipse starts, the battery could already be discharged and not operative. A way to solve this problem is to add a discharge regulator capable of keeping the s/c operative when an eclipse is expected, or to oversize the solar panels.
The PPT regulator is an active control acting on the voltage: a DC/DC converter is put in series to track the power demand. It is itself a load, since it absorbs 4-7% of the power. Heavier and less efficient, it is well suited for short-duration missions since it better exploits the discharge regulator, but it consumes power.

Thermal control system

Conduction and radiation are the main heat-transfer mechanisms in space; convection is important only within atmospheres (Mars, Earth, etc.).
The power exchanged depends on the temperature difference between the source and the body: linearly for conduction and with the fourth power of temperature for radiation.
Absorptivity and emissivity represent the same kind of coefficient, the ratio between absorbed (or emitted) power and incoming power; the difference is the wavelength band: absorptivity applies in the solar (short-wavelength) band, while emissivity applies in the infrared.
Radiation is the most important heat-exchange mechanism in space. It depends on the surface and its coefficients, but also on the geometry of the s/c, its attitude and orientation.

One important coefficient is the view factor: an integral over the infinitesimal areas of two surfaces, computed on a mesh of discretized surfaces (opening the door to multi-nodal analysis). The power exchange is due to the geometry, the material characteristics, and the temperature difference. The view factor is the most complex coefficient; it can be computed using statistical programs running a Monte Carlo analysis.
Heat sources
The sources are: direct flux from the Sun; the high-frequency band reflected by the nearby body (albedo); the low-frequency band emitted by the planet; and internal sources (more complex to quantify).

Solar flux
Solar flux depends on the inverse square of the distance
from the Sun.

Albedo
The albedo is the part of the electromagnetic spectrum not absorbed by the planet but reflected. It depends on the solar flux, the distance, the orientation with respect to the source, and on a coefficient (a) that accounts for the reflectance of the different surfaces. It exists only over the lighted part: if the area the s/c flies over is in shadow, there is no albedo.
The albedo view factor depends on the angle θ between the surface and the incoming radiation.
Infrared Radiation
For the IR source, the planet is assumed to be a black body; its temperature and emissivity are the main variables. The fast-rotating or slow-rotating assumption applies to planets but also to asteroids.
The flux coming from the radiation of the planet can be computed using the view factor, or with a coefficient depending on the distance from the planet.
The beta angle is the angle between the orbit plane and the Sun direction. This inclination varies in time because of perturbations; it can be permanently 90° in a dusk/dawn polar orbit, while it is 0° in a noon/midnight polar orbit.

Internal Power
It is very difficult to compute. Every electrical component has an efficiency in power absorption (the ratio of the power actually used to the total power input), so part of the incoming power is dissipated as heat, differently for every load. In a first approximation, all the distributed power shall be considered inside the internal power budget, with a few exceptions:
- Power transmitted through antennas
- Electric thrust (ion thrusters)
- Transients in rotating parts
- Solar arrays
A strict requirement is to identify the temperature range, different for every component.
One way to solve the problem is the multi-nodal analysis, where every component is a different body with its radiation fluxes, conduction, etc.
The basic approach is the mono-nodal analysis, where a single spherical body with a single conservative temperature range is analyzed. The component with the most stringent temperature range is the battery (-5 to +25 °C).

A cold and a hot case must be studied. The cold case is typically deep space; the hot case may be full Sun exposure. During the hot case the internal loads may be switched off, decreasing the internal power contribution. Control can be active or passive: active when electrical components are used for thermal control (heaters, Peltier cells, etc.). Usually the analysis starts with surface exposure, controlling attitude, flux coefficients, distances and which loads can be switched off; only at the last point is the control chosen to be active or passive. Once the most conservative temperature range is obtained, the mono-node analysis is computed.
Optical coefficients are selected during the analysis, choosing a sphere surface area equal to that of the s/c.

Infrared for the planets is modeled with a black body approximation.


Thermal Analysis and Design: STEPS

1) Temperature limits definition


2) Power dissipated evaluation
3) Satellite area, evaluated as equivalent to a sphere
4) Worst case selection
o Hot (albedo + Sun + IR + Max Internal power)
o Cold (IR + Min Internal)
5) Hot case temperature computation for optical properties and area selection:
o Tmax allowed and hot case, computing the radiator area (limited by the single-node analysis). The area is a function of maximum temperature, maximum heat transfer, absorptivity and emissivity coefficients.
o Is the area OK with the mechanical constraints?
▪ Yes: check power in the cold case.
▪ No:
• Better radiator coatings
• Evaluate the amount of delta load for the next refined phase
• Relax thermal boundary constraints
6) Check the cold case:
o Verify whether the temperature requirements are satisfied with the selected optical
properties and involved area:
▪ Yes: details with more nodes
▪ No:
7) A) Heater sizing
o Minimum temperature based on this area (if too cold: heaters or heat switches)
o Communicate heaters power to the EPS engineers
B) Identify the acceptable thermal limits for both cases

o Start addressing components to be separately treated in the multi-node modelling

Insulation
Insulation is essential for decoupling the internal environment of the components from the external one. The internal flux is contained, the external one is reflected. If necessary, the internal power can be radiated away using a radiator.
The external coating is chosen for the characteristic wavelengths at which its optical coefficients are optimized. The emissivity (ε) and absorptivity (α) are specified during the sizing.

A first-surface mirror is a thin layer of metal (silver, gold, aluminum, depending on which part of the EM spectrum shall be reflected), usually deposited by condensation, granting a high reflectivity. However, since no external protection is present, the thin layer is extremely delicate against external radiation, dust, etc. For this reason, a highly transmissive material (such as glass, Teflon, polyimide) is placed above the thin metallic layer, turning it into a second-surface mirror.
Multi-Layer Insulation
MLI is a coating that decouples the outside and inside environments. Between the external and internal layers there are many sheets that do not touch each other: the decoupling action is due to the vacuum between the layers. Because of the vacuum and the high reflectivity, only a very small fraction of the flux is transmitted.
It is important to maximize the surface covered by each blanket: with large blankets there are fewer connections (structural, electrical) per unit area, hence a reduced heat leak, while smaller blankets need more connections to support the higher number of sheets. Holes must be made in the blankets for venting: otherwise the trapped gas expands and can burst the blanket. It is important to check the holes before launch.
Aging of α and ε is important but not covered here. This effect is due to:

- UV radiation
- Charged particles
- Atomic Oxygen
- Contamination
- Micrometeoroids & debris
- Corrosion (launch site)

Radioisotope Heating Units
The RHU is a small thermal source, used locally to warm up motors or wheels on rovers. It is based on plutonium decay: it produces heat as the radioactive material decays. There are currently no European manufacturers, but both the US and Russia have developed and used these devices for deep-space missions. Never propose this as an electrical energy supply: it is not designed for electrical energy, only for heat.

SSEO Lecture 24/5/2022


Today we will see an exercise about thermal control system. In the second half of the lesson, we will
see a presentation about structural systems.

EXERCISE: THERMAL CONTROL SUBSYSTEM


As usual, we will consider the sizing for the Galileo satellite.
We must know the position of the spacecraft, its size and orientation, and what kind of components we can use to keep it within a precise range of temperatures.
We must keep the payloads within a range of temperatures to make them work properly.

Let’s write the DATA:

Rorbit = 29800 km

i = 56°

Consider the spacecraft as a single node: all components are subject to the same fluxes of albedo, IR, Sun, etc., and all parts of the s/c emit the same amount of radiation.
We need to consider the temperature ranges of the components and then take an average range.
In reality it is difficult to maintain all parts at the same temperature, so a more complex analysis would consider the parts of the satellite as different nodes (accounting for the parts facing the Earth or not: a node is hotter when hit by the Earth flux plus the Sun flux. We can leave the side facing the Earth at lower temperature and place star trackers and batteries near the colder face instead of toward the Sun).
We could assume a strict temperature range for the whole spacecraft, but Galileo is large, so it is better to assume a more relaxed range.

Trange = -25 to +30 °C and taking a preliminary margin of 15 K, we get a range of -10 to +15 °C.

The external sources of radiation are the Sun, the albedo, the planets and other bodies all around.
Galileo orbits around the Earth, so we can take:

qsun = 1367.5 W/m2


Albedo is energy emitted by the Earth in the visible wavelengths, due to the reflection of solar radiation.
The albedo flux is higher close to the Earth. We are in a MEO orbit, so it is lower than in a LEO.
Considering a = 0.35 we get:

qalb = qsun * a * (Rpl / Rorbit)^2 = 21.88 W/m2

We are quite far from Earth so the amount is low (almost negligible).

Assuming:

- ε = 0.85
- σ = 5.67*10^-8 W/(m^2 K^4)
- T = 255.15 K (-18 °C)

We obtain

qir, pl = 9.34 W/m2

Now, considering the Galileo size as 2.5 m x 1.2 m x 1.1 m, we can compute the surfaces:

A1 = 2.5 * 1.2 = 3 m2

A2 = 2.5 * 1.1 = 2.75 m2

A3 = 1.2 * 1.1 = 1.32 m2

Atot = 2*A1 + 2*A2 + 2*A3 = 14.14 m2

We don't consider the solar panels: the heat exchange between body and panels can be neglected because their radiation is directed towards space and not towards the body; moreover, the conduction is small. We could consider them in a bi-nodal analysis.
Here some absorptivity and emissivity coefficients are reported for many coatings and paints. For the first step we considered values in accordance with aluminized Kapton:
Absorptivity: α = 0.4; Emissivity: ε = 0.6

Now we can compute the heat powers.

How can we find the angles? We consider one face always hit by the Earth albedo and IR radiation; the ecliptic is inclined at 23.5 degrees, so the Sun incidence angle is taken as 32.5 degrees.
Sun power: Qsun = 2174.4 W (using Asun = Atot/3 and 𝜗sun = 32.5°).

Albedo power: Qalb = 26.26 W

Infrared radiation power: Q ir = 11.21 W


Power generated by solar panels is not considered in this example.

We have to consider two scenarios: the hottest and the coldest ones.
Knowing the average requested power Q average = 2000 W, we consider as powers to dissipate:

Qint, max = 1000 W (very conservative)

Qint, min = 600 W

Ok, now let’s consider the hot case:

Qtot, hot = Qsun + Qir + Qalb + Qint, max = 3211.87 W

So Qemitted = Qtot,hot and we can retrieve Tsc = 285.85 K = 12.7 °C. We are inside the range, so we are OK!

Now let’s investigate the cold case:


Qtot, cold = Qir + Qint, min = 611.21 W

And Tsc = 188.8 K = -84.35 °C. That's too cold! We need to use HEATERS.

Assuming Tsc = -10°C = 263.15 K we obtain

Qheaters = 1695.5 W, but this value is too high: heaters would require too much power.

What can we do? Coat differently, play with emissivity, make it lower:
Now we are taking:

Absorptivity: α = 0.4; Emissivity: ε = 0.35

Ok, now let’s recompute the required heaters power:


Qheaters = 734.38 W we will still need heaters but lower power is required.

Let’s come back to the hot case and check it with the new values of emissivity and absorbability:

Tsc = 327.09 K = 53.94 °C, but this is too high!

We need more emissivity to keep the s/c cold so the solution is to use radiators.

We add radiator panels to make the overall emitting surface bigger and increase the emissivity; they are not placed over the body surface.

Unknown: area of radiators.

Using ε_rad = 0.88 and Tmax = 15 °C = 288.15 K, and considering the radiators as extra surfaces (Atot remains the same, but Asc = Atot + Arad now), we retrieve:
Arad = 3.71 m2, which is not so big with respect to the spacecraft area.
If we don't want an extra surface, we can place the radiators directly on the body: parts of the spacecraft will emit through coatings and the other parts through radiators.

How to reduce the heater power and the radiator surface? Change the coating, and for the cold case use louvers to emit less.
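The hot-case balance and the radiator sizing of this exercise can be reproduced numerically (Earth radius 6371 km assumed; the intermediate values in the text are recovered to within rounding):

```python
import math

sigma = 5.67e-8
q_sun, a, T_pl, eps_pl = 1367.5, 0.35, 255.15, 0.85
R_pl, R_orb = 6371.0, 29800.0          # Earth radius and orbit radius [km]
alpha, eps = 0.4, 0.35                 # final coating choice
A_tot = 14.14
A_sun, A_nad = A_tot / 3, 3.0          # Sun-facing and nadir-facing areas [m^2]
th = math.radians(32.5)                # Sun incidence angle

# Environmental fluxes scaled to MEO
q_alb = q_sun * a * (R_pl / R_orb) ** 2
q_ir = eps_pl * sigma * T_pl ** 4 * (R_pl / R_orb) ** 2

Q_hot = (alpha * q_sun * A_sun * math.cos(th)
         + alpha * q_alb * A_nad + alpha * q_ir * A_nad + 1000.0)

# Radiator area so that the hot case sits at Tmax = 288.15 K (eps_rad = 0.88)
T_max, eps_rad = 288.15, 0.88
Q_body = eps * sigma * A_tot * T_max ** 4
A_rad = (Q_hot - Q_body) / (eps_rad * sigma * T_max ** 4)   # ~3.7 m^2
```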

To do a more refined analysis we need to split the equipment into different nodes.
Example: a lander in contact with the Moon's surface to produce water, with 3 nodes: the solar panels (conduction with the lunar soil), a node for the electronics and a third node for the radiator. Some radiation from the solar panels can heat the electronics node.
STRUCTURE and CONFIGURATION SUBSYSTEM

Simulations with FEM models are not required, but we are requested to carry out at least the basic analysis with handwritten equations. For the configuration analysis, just use CAD programs.

Usually, low frequencies are associated with large amplitudes. Deformations of large antennas are large even under static loads, so we must make sure they do not enter the resonance domain.
First of all, we define all the scenarios.
Transportation is taken into account because not all components are produced where the spacecraft is assembled: some components of NASA rovers are produced in Italy.
If we land or re-enter, we need to deal with stresses, which translate into requirements.
From requirements we approach the development of a configuration.


Move to the design, starting with the primary structure: material selection and compatibility between materials. Check properties, densities, compatibility and so on. Then tests, which is the shortest phase. If you fail to meet the requirements, redesign. Then move to the optimization of some components and finally to the final testing.
Even in medium orbit we are not safe. In LEO we have atmospheric drag and corrosion, and we can have deployments of payloads, hence pyrotechnic charges or springs that produce shocks.
In the launcher manual we can find the dynamic envelope: don't use the static envelope or the fairing space; the dynamic envelope is much smaller, so use it for sizing. Use it also to simulate the launch, not only to size.
Once we have planned the packed configuration, we move on to the unfolded configuration. We need to think about how to move from one configuration to the other; don't make it too complex, there are standards to respect.
First of all, place the payloads, then design around the center of mass (locate reaction wheels and internal elements).
Place the solar arrays, check that the visibility of the instruments is not occluded and make sure the payloads point in the right direction.
Then refine the appendages and place the antennas so that they do not impact anything during deployment from the launcher.
For quite big spacecraft: here we have 6 spacecraft attached to a central cone used as a custom adapter working as an interface. The base diameter of the cone is the same as the launcher interface. Two separate cones are used and then stacked one on top of the other.
Structure design process:

1. design configuration
2. move to structure
3. adjust configuration
4. and so on

The structure depends on the geometry: identify primary, secondary and tertiary structures. Initially identify just the primary structure, simplify it with masses and beams, and then refine it by adding and identifying masses and elements.
We can apply Ritz, FEM, lumped-parameter and other models to the simplified structure.
FEM analysis: don't use CAD software; don't use SolidWorks, because it is made for CAD, not for FEM analysis. Use Ansys.

In the project we are not requested to size for the whole dynamic environment of the launch phase, just a quasi-static analysis (load applied at low frequency; increasing the frequency we have sine vibrations, shocks, random vibrations and so on). An engineer needs to validate the model over the whole spectrum.

Increasing the frequency, the attention shifts to the secondary and tertiary structures.

Launch environment loads: we need to validate the system at all the stresses and frequencies reported. All this information is in the launcher manual; otherwise we have to contact the launcher provider.
Low-frequency sinusoidal loads are usually due to engine combustion.
Random loads are reported as a PSD; we need to extract the equivalent g level from the plot, computing the slope of the PSD and the area under the curve.
We don't need to sum the g levels of all frequencies; instead we vary the frequency and apply that specific g, otherwise we oversize the spacecraft (considering 11g as quasi-static without the frequency content, 20g of acceleration would be out of scale).

This type of load is important only for huge structures (large panels). You need to convert that information and then follow the same procedure as before.
We always apply a margin factor, at most 2. We can choose to use just the data in the manuals or proceed with a Miles analysis.
Once the maximum load is found in the manual, compute the equivalent load, then the maximum stress on the section, and look for materials and choose the geometry.
The outputs of the first part are materials and shapes. The buckling condition depends on the material, the size and the constraints applied at the ends (pinned-pinned, pinned-free and so on). The configuration is reported in the manual.
The critical load is not a function of the applied load. We can compute the critical stress, compare it with the maximum stress, and check that the margin of safety is greater than zero. If yes, we can move on.
The fundamental frequency of the entire structure must be at least 2 times higher than the launcher operating frequency, in order to decouple the effects. If the frequency is too low, increase the size or add reinforcing elements.
We can do a one-element FEM approximation, but it will be inaccurate. Don't use exotic formulas to find frequencies; use proven theory.
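The buckling and margin-of-safety checks just described can be sketched with the classic Euler column formula; the material, section and load numbers below are illustrative assumptions, not project values:

```python
import math

def euler_critical_load(E, I, L_eff):
    """Euler critical buckling load [N] for a column with
    effective length L_eff (set by the end constraints)."""
    return math.pi ** 2 * E * I / L_eff ** 2

def margin_of_safety(allowable, applied, safety_factor=2.0):
    """MoS = allowable / (applied * SF) - 1; the design passes if MoS > 0."""
    return allowable / (applied * safety_factor) - 1.0

# Illustrative: aluminum strut (E = 70 GPa), I = 1e-7 m^4,
# 1 m pinned-pinned column, 10 kN applied load (all assumed)
P_cr = euler_critical_load(70e9, 1e-7, 1.0)
mos = margin_of_safety(P_cr, applied=10e3)
```

Note that P_cr depends only on material, section and constraints, not on the applied load, which enters only in the margin-of-safety check.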

Three slides with criteria for design and material selection are shown, but they are not reported here.

Material selection must take into account strength, stiffness, cost, the RTG used (americium or other radioactive materials) and many other parameters. Don't design 30 kg of radioactive material onboard.
