IEEE Humanoid Report of Future Standards Development
All content following this page was uploaded by Aaron Prather on 24 September 2025.
Humanoid robots occupy a unique and highly anticipated space in the robotics
landscape. Unlike other automation systems, they promise to operate in
environments designed for humans, performing tasks as diverse as industrial
assembly, healthcare support, and public-facing services. Their appeal is
obvious: a single robotic form that can, in theory, adapt to almost any setting.
Yet, that promise largely remains unfulfilled. The reality is that humanoids face
much greater challenges than most robotic systems, not only technically but
also in how they are evaluated, certified, and trusted. The current standards
framework is not designed for them. Most existing robot standards assume
fixed or statically stable systems and do not consider the dynamic, inherently
unstable nature of a humanoid’s locomotion. Nor do they fully address the
complex ways these machines interact with people, not just physically, but
socially and psychologically.
This report seeks to bridge that gap. It is not a set of final answers, but rather a
framework of findings and recommendations that can guide the next phase of
development for humanoid standards. The analysis draws on three critical
themes—classification, stability, and human-robot interaction—each of which is
deeply interconnected and essential to moving beyond pilot programs toward
widespread deployment.
For Standards Development Organizations (SDOs), the need is even more
pressing. Humanoids are not just another class of mobile robots; they combine
characteristics from nearly every existing category. Without a unified approach
to classification and risk assessment, different SDOs risk producing fragmented
or conflicting requirements that slow adoption and erode public trust.
This report is written not only for engineers and researchers, but also for SDO
members who will turn these ideas into actionable standards. It is designed to
show where existing standards can be extended, where entirely new ones are
required, and how organizations can collaborate.
The Value of This Report
• For innovators and manufacturers, the findings provide insight into how to design humanoids that can be certified and deployed in diverse human environments. The recommendations on classification, stability testing, and interaction guidelines will help align engineering priorities with future regulatory expectations.
• For Standards Development Organizations, this report offers a starting
point for coordination. It highlights where ASTM, IEEE, and ISO efforts can
intersect—ASTM leading on test methods, IEEE on performance metrics,
ISO on safety thresholds—and why these must evolve together rather
than in isolation.
Looking Ahead
The chapters that follow dive deeper into each theme, presenting the details
behind these findings and offering concrete recommendations for moving
forward. While the path to full standardization will take time and require close
collaboration among multiple SDOs, the framework is ready to be built. For
those shaping the future of robotics, including engineers, researchers, and
standards professionals, this report serves as both a guide and a call to action.
The decisions made today about classification, stability, and human-robot
interaction will determine whether humanoids remain a niche technology or
become trusted, integrated tools in the spaces where we live and work.
The Humanoid Robot Market
It could be said that humanoids are designed to “displace” rather than “replace” humans: people can deploy general-purpose humanoid robots to perform menial tasks while they attend to other, “higher-value” work. By contrast,
industrial robots are designed to move faster, move more precisely, and lift
heavier payloads than humans can. Industrial automation has been positioned
for dull, dirty, and dangerous applications, tasks that humans don’t want to do.
One core measure that has made industrial robots so successful is that there is
a clear return on investment (ROI) and a measurable payback period for this
autonomous equipment. For any automation investment, the system must
return greater value than the cost of the solution.
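The ROI logic above can be sketched as a short calculation: the payback period is the up-front cost divided by the net monthly savings the system returns. The figures and function below are hypothetical illustrations, not data from this report.

```python
# Illustrative payback-period calculation for an automation investment.
# All figures are hypothetical assumptions, not data from this report.

def payback_period_months(system_cost: float,
                          monthly_labor_savings: float,
                          monthly_operating_cost: float) -> float:
    """Months until cumulative net savings cover the up-front cost."""
    net_monthly = monthly_labor_savings - monthly_operating_cost
    if net_monthly <= 0:
        raise ValueError("System never pays back: operating cost exceeds savings.")
    return system_cost / net_monthly

# Hypothetical industrial-robot cell: $250k installed, displacing $12k/month
# of labor, costing $2k/month to run.
months = payback_period_months(250_000, 12_000, 2_000)
print(f"Payback period: {months:.1f} months")  # Payback period: 25.0 months
```

The point of contrast in the text is that for a humanoid performing many tasks of differing value, `monthly_labor_savings` has no single well-defined value, which is exactly what makes the ROI unclear.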
The ROI for humanoid robots remains unclear. Humanoid robots will be able to perform a variety of tasks, each with a different value to the robot operator. Compare this to an industrial robot deployed for a specific task, such as spray painting a car body or assembling a circuit board for an 8-hour shift: that work is measurable and quantifiable.
Determining the value of a humanoid robot is difficult because a dynamically balancing humanoid is an order of magnitude more complex than the state-of-the-art industrial systems or wheeled autonomous mobile robots (AMRs) in production today. This makes the market difficult to size, and results in wildly varying market-size estimates and growth projections.
Until now, the limits of computing, power, and AI have hindered the realization
of a humanoid robot form factor.
In researching this project, we collected data for over 160 different humanoid
robot models that are being developed around the globe by over 120
companies.[5] China and the Asia region in general lead the world with the
number of models and companies, and the Chinese government is pouring
billions of dollars into its domestic robotics industry. The U.S. and the Americas
are No. 2. Europe, the Middle East, and Africa (EMEA), and the rest of the world
(ROW) are in a distant third place. See Figure 1.
Figure 1 - Humanoid robot models by headquarter region, 2025. (n=169) Source:
ASTM Humanoid project database.
There is currently no reliable market-sizing estimate for humanoid robots. However, the research consistently predicts a multibillion-dollar market within the next five to ten years. The most conservative and best-informed estimate is USD 2 billion by 2032 (Interact Analysis).[5]
Mobile robots have also evolved quickly in the past decade, and in the process,
many of the early-to-market companies have either been acquired or gone out
of business as these systems commoditize.
Collaborative robots can work near humans but must be designed to avoid contact. If contact does occur, they must be force-limited so that humans are unlikely to be hurt. Humanoids share characteristics with collaborative robots, with a key difference: they can tip over and potentially harm a nearby person even without direct contact. Future humanoids will be developed with the ability to touch and hold humans, e.g., helping an elderly person out of bed. This is currently beyond the scope of the collaborative robot standards.
The very nature of humanoid design is that these robots are likely to end up
sharing workspaces with humans. The majority of existing models have been
engineered to mimic the physical characteristics of humans. The average height
of the current crop of humanoids is 163 cm (64 in),[5] and the average weight is 66 kg (145 lb).[5] With two exceptions (1X and Clone), the humanoid robots are all covered in hard metal alloys, carbon fiber, or hard plastic.
Sources:
• https://www.marketsandmarkets.com/Market-Reports/humanoid-robot-market-99567653.html
• https://www.marketsandmarkets.com/PressReleases/usa-humanoid-robot.asp
• https://www.snsinsider.com/reports/humanoid-robot-market-1616
• https://www.cervicornconsulting.com/humanoid-robot-market
• ASTM humanoid robot database
• https://interactanalysis.com/insight/humanoid-robots-large-opportunity-but-limited-uptake-in-the-short-to-mid-term/
The Unique Risks Humanoids Bring and Why
Standards Must Evolve
• Reliability and Predictability – Unlike traditional robots, humanoids must
adapt to constantly changing human environments. A sensor glitch or
software fault isn’t just a technical failure—it can directly impact human
safety.
These challenges are not insurmountable, but they demand a new level of rigor.
Existing robotics standards, designed for fixed, wheeled, or cooperative
systems, do not account for the dynamic balance, high-stakes collaboration,
and human-like interaction that humanoids bring. Simply extending current
safety requirements will not be enough.
The chapters that follow tackle these issues directly, offering a structured way
forward for manufacturers, researchers, and Standards Development
Organizations (SDOs):
Bridging the Gap: From Risks to Standards
The risks humanoids introduce are no longer theoretical. As these robots move
out of controlled labs and into warehouses, hospitals, schools, and homes, the
challenges outlined above are already emerging in real deployments. The
question is not whether humanoids can perform the tasks, as they can
increasingly do so, but whether they can do so safely, predictably, and in ways
that humans will accept and trust.
Current standards only partially address this reality. Most were designed for fixed or wheeled robots operating in either isolated industrial cells or highly structured service roles. Humanoids, by contrast, are generalists by design, capable of working in environments that are not engineered for automation and of interacting directly with untrained users. This mismatch creates standards gaps
in three critical areas:
• Defining what kind of humanoid is being deployed (and for what level of
risk),
• Evaluating its stability and performance in dynamic, unstructured
settings, and
• Guiding human-robot interaction, physical and psychological, in diverse
populations.
Bridging these gaps requires more than simply adding new safety rules; it
demands insight tailored to the specific application. Not every humanoid use
case carries the same level of risk, and not every risk requires an entirely new
standard. Some scenarios can be managed with existing guidance, while others
represent critical barriers to deployment unless addressed through targeted
innovation, validation, or policy updates.
The following use case analysis examines how these risks manifest across key
sectors—manufacturing, healthcare, public services, and home environments—
and assesses where standards are sufficient, where they require adaptation, and
where entirely new frameworks may be necessary.
The table that follows provides this sector-by-sector risk view, setting the stage
for deeper discussions on how classification, stability, and human-robot
interaction standards can close these gaps.
Risk Rationales for Select Humanoid Use Cases
The table highlights where humanoid deployment faces manageable versus
critical risks; however, context is crucial. Below is a brief overview of how these
risks appear across key use cases and what that means for standards
development.
Warehousing Operations
Standards Needs: Leverage ISO 10218, ISO/TS 15066, and IEC 61508; expand
ergonomic guidance under ISO or ASTM for repetitive material handling.
Manufacturing Support
Standards Needs: Extend ISO 10218 and ISO/TS 15066 for collaborative industrial
tasks; develop IEEE guidance for psychosocial impact and adaptive task-sharing
behaviors.
Facility Maintenance
Inspection and minor repair tasks generally carry moderate risks. Collaborative
handoffs—such as tool delivery—highlight the need for reliability and better
functional adaptability in unstructured environments, though psychosocial and
ethical risks remain low.
Standards Needs: Apply ISO 10218 and ISO 13482; create UL or ISO guidance for
adaptive maintenance behaviors and handoff ergonomics.
Customer Service & Reception
Public-facing indoor roles pose increased psychosocial and ethical risks due to
overtrust, unrealistic expectations, and concerns regarding data privacy.
Cybersecurity is essential to protect sensitive interactions, while physical risks
remain modest.
Standards Needs: Build on ISO 13482 and IEEE 7001; expand ISO/IEC 24029 for
public trust in decision-making and UL guidelines for public-facing HRI.
Standards Needs: Integrate IEEE 7010 and NIST IR 8269; develop new IEC
frameworks for robotic surveillance ethics, bias mitigation, and active threat-
response reliability.
Standards Needs: Expand ISO 13482 and NIST CSF; create ASTM protocols for
environmental adaptability and ISO standards for robot-road interaction safety.
Standards Needs: Strengthen ISO 13482 and IEEE 7001; develop UL consumer
safety certifications for residential humanoids.
Elderly & Disability Support
This is one of the highest-stakes use cases. Close physical assistance, health
monitoring, and emergency support demand rigorous safety, reliability, and
ethical oversight. Emotional dependence and privacy concerns are significant,
and ergonomic adaptability is often limited.
Standards Needs: Build on ISO 13482 and IEC 80601-2-77; introduce ISO
standards for human-centered care robotics and ASTM behavioral compliance
metrics.
Standards Needs: Extend ISO 13482, ISO/IEC 29134, and IEEE 7004; create new
ISO/IEEE standards for developmental appropriateness, safe interaction, and
responsible data handling.
The Classification of Humanoid Robots
Introduction
The first question when considering the creation of standards for humanoid
robots is: “What is a humanoid robot?” The question seems simple colloquially, but distinct discrepancies arise in any definition that attempts to precisely delimit the category of robots to which the name applies. Does a robot need two arms and two legs to be a humanoid? Does it need a head? Does that head need sensors arranged like a human’s? How human-like does it need to look? What about size, weight, shape, communication, mobility, strength, behaviors, identity, and so on?
Innumerable characteristics could be used to define humanoids. This variance
in characteristics means that using any particular one in a definition would
result in overly restrictive categories that are not general enough to be useful to
manufacturers, users, or standards creators. The alternative is to define a
humanoid as a robot with any human-like characteristic at all, regardless of
application, functionality, or any other feature.
While not excluding any robots, this definition is also not useful, and would
require many subclassifications to sort humanoids into useful categories,
leading back into the first problem. As such, this chapter avoids defining
humanoid robots entirely, and instead recommends classifying robots by their
physical structure and capabilities, executive functionality, and use case/
application, just as is done in some current robotics standards.
This approach results in an overall classification system that can be used to sort all robots, humanoid robots included. In the same effort to situate humanoid robots relative to other robotic applications, a classification for all robots can thus be presented. This decoupling of the term “humanoid” from classification will lead to a more useful classification of robots by their function, capabilities, and use cases. Some of these classes may involve, but will not wholly depend on, the robot’s appearance or anthropomorphic structure, which are most often given as the qualitative defining factors of humanoid robots.
Note: Any use of the term humanoid for the rest of this chapter will refer
broadly to any robot that someone may consider to be even partially
anthropomorphic.
Current Classifications
Prior Classification Efforts
R15.08
Figure 1: Classification of Industrial Mobile Manipulators and applicable
standards from R15.08.
Example of Possible Classifications
As can be seen from prior robotics classification approaches, there is a variety of features upon which humanoids could be classified (a feature here being any characteristic of the robot that can be used for differentiation). Prior classifications are restricted in scope to specific applications or forms of robots.
Looking forward, however, humanoid robots are intended to be general-
purpose robots, and so can fill many roles and come in many forms, and as such,
most potential classifications that apply to any robots can and will apply to
humanoids as well.
Based on the types of features that are often used to qualitatively describe humanoids, there are many possible ways to classify them, and robots in general. When creating classifications, the goal is to balance simplicity with usefulness, which often leads to a single scale of classes (e.g., Class 1 to Class 5). However, a scalar approach inherently limits the generality and usefulness of the classification. Because no coherent classification approach for robotics currently exists, this chapter presents many possible features by which a classification of robots could be made, focusing on features that also apply to humanoid robots. The variety of features illustrates the complex considerations that go into designing and implementing humanoid robots, as well as the challenge of creating a clear classification of robots.
Table 1: Examples of categories of features by which robots can be classified. The categories span physical capabilities, high-level behaviors, and application (interactions and risk).
These classification approaches concern features inherent to the design and motion of the robot. The features describe active capabilities the robot has, including aspects of its physical structure and design. These features will change if the physical characteristics of the robot change, especially for any robot that can transition between levels.
ENVIRONMENTAL MANIPULATION
LOCOMOTION/MOBILITY
HUMAN INTERACTION CAPABILITY
Behavior Features
DYNAMIC BEHAVIOR GENERATION
This outlines the level of interaction that exists for a person to affect how the
robot will perform its task(s) while the task, or behavior, is underway.
This describes how capable the robot is of adjusting its planned motions based on the payload it is carrying. Here, “hard-coded” refers to pre-identified, assumed inertial parameters of the payload, while “adaptive” refers to online identification and/or compensation of measured inertial parameters of the payload.
APPLIED LEARNING
This captures the extent to which models learned from data determine the behaviors of the robot.
Level 2: Underlying behaviors are still traditionally controlled, but high-level
action planning is done by a pre-learned policy.
Level 3: Executive action planning is done by a policy that continues to learn
in operation.
Level 4: End-to-end learning, or at least all processes executed and governed by an integrated, continuously learning policy.
LOCOMOTION PLANNING
This capability captures the extent to which a robot can plan and then follow a
path plan in a variety of environments. More complex environments require
more complex sensing, planning, and motion control algorithms.
Level 3: Robot views its surroundings, then plans and takes action to move as quickly and safely as possible to a safe robot state. This specifically addresses the hazard, e-stop, or other fault that initiated the hazard-response state.
Level 3-1: Robot additionally maintains enough power in its joints, even after loss of the main power source, to carry out these safety actions.
Level 4: Robot always operates such that component failure or power loss will not endanger nearby humans or other designated protected objects (e.g., maintaining a pose at all times such that a power-loss collapse will fall away from any human).
When a robot does not have to interact with humans in an environment, it can
act autonomously. Multiple robots can communicate individually or with a
centralized control. When in a mixed environment, meaning both humans and
robots are present, the risk depends not only on the robots’ control but also on
the background of the present humans. Having only trained humans in the
environment means they know how to behave in a shared space, are aware of
the instructions, and are adept at handling different scenarios.
INTERACTION COMPLEXITY
Refers to the variety of other actors that the robot must be able to interact with,
including both other robots and humans of various amounts of experience.
Level 0: Fully separated, in line with current industrial standards (e.g., traditional industrial robots in fenced zones with light curtains, with safety PLCs triggering shutdowns). Interaction is prevented.
Level 1: Interaction with other robots (e.g., fleets of AMRs in a warehouse).
Level 2: Interaction with trained humans (e.g., collaborative robots on
assembly lines working alongside knowledgeable staff, potentially involving
remote operation).
Level 3: Interaction with untrained humans (e.g., service robots in public
spaces, delivery robots on sidewalks, potential future home assistants).
Level 4: Interaction with vulnerable populations (children, seniors, medical patients, etc.).
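The non-scalar classification the chapter argues for can be sketched as a record with one axis per feature. The interaction-complexity levels below follow the list above; the second axis and the helper logic are illustrative assumptions for this sketch, not part of the report.

```python
# A minimal sketch of a multi-axis (non-scalar) robot classification record.
# Interaction levels follow the Interaction Complexity list above; the
# "dynamically_balancing" axis and the heuristic are hypothetical examples.
from dataclasses import dataclass
from enum import IntEnum

class InteractionComplexity(IntEnum):
    FULLY_SEPARATED = 0         # interaction prevented (fenced industrial cells)
    OTHER_ROBOTS = 1            # e.g., fleets of AMRs in a warehouse
    TRAINED_HUMANS = 2          # collaborative robots with knowledgeable staff
    UNTRAINED_HUMANS = 3        # service robots in public spaces
    VULNERABLE_POPULATIONS = 4  # children, seniors, medical patients

@dataclass(frozen=True)
class RobotProfile:
    """One axis per feature, rather than a single Class-1-to-Class-5 number."""
    interaction: InteractionComplexity
    dynamically_balancing: bool  # bipedal humanoid vs. statically stable base

    def exceeds_existing_standards(self) -> bool:
        # Hypothetical heuristic reflecting the report's argument: dynamic
        # balance combined with untrained or vulnerable humans falls outside
        # the assumptions of current robot safety standards.
        return (self.dynamically_balancing
                and self.interaction >= InteractionComplexity.UNTRAINED_HUMANS)

cobot = RobotProfile(InteractionComplexity.TRAINED_HUMANS, dynamically_balancing=False)
humanoid = RobotProfile(InteractionComplexity.UNTRAINED_HUMANS, dynamically_balancing=True)
print(cobot.exceeds_existing_standards())     # False
print(humanoid.exceeds_existing_standards())  # True
```

A scalar class number would force these two axes onto one line; keeping them separate is what lets the same scheme cover both humanoids and conventional robots.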
INTERACTION TYPE
Refers to the mode of interaction that the robot must be capable of, from digital
communication with a central server, to physical communication and
interaction with humans.
APPLICATION DOMAIN
This refers to the general application the robot will be used for. It implies the
environment, interactions, expected actions, and capabilities the robot must
have, as well as the most common disturbances, safety considerations, and
design requirements that are expected from a robot in this application domain.
This classification breaks with the numeric, performance-based metric
structure.
• Service - The robot is implemented and used by an owner to interface and
interact with the public.
• Public - The robot must autonomously act in an unstructured
environment, and adjust plans and actions based on observations of the
surroundings.
…..
Classification of Humanoids
Humanoid robots, or rather “robots with human features,” are not necessarily novel in their capabilities or features, as many other robotic systems share those traits. Humanoids do, however, indicate a clear expansion of the types of
environments robots will be deployed in. Deployable applications will move
from isolated cages into shared spaces, necessitating layers of new safety and
performance requirements. Currently, robots existing in shared spaces are
typically either small or purely animatronic. Robots capable of the speed and
strength of industrial robots are usually completely isolated in cages or at least
only deployed in industrial environments around trained personnel. Humanoid
robots will potentially have the characteristics of today’s industrial robots while
also existing in shared spaces around untrained people. A classification of
humanoids could thus be used to understand the safety expectations and
performance requirements associated with the robot and its behaviors in these
environments. Consumers want to know what to expect when products come
to market, and manufacturers want to know what kind of capabilities and
functions need to be built into their robots.
useful for capturing the intent behind the creation of the robot, describing why
the robot that is used for an application should be a humanoid rather than a
more classical robot structure. Note again that this is an example of a
classification approach for humanoid robots, and is meant only as a starting
point from which future classification efforts can begin.
For what reason has this robot been made to look like a human?
Conclusion
The effort to develop a classification for humanoids uncovers the larger need for
a robotic classification in general. The classifications presented above are
example strategies for classifying all robots. However, because the classifications
are based on features that humanoid robots also possess, the same
classifications can be used to sort humanoids by capabilities and features. Once
a classification for all robots has been created, then a classification for
humanoids can be created within that context, adding features that are unique
to humanoids, such as the similarity to human form, as discussed above.
Furthermore, in the interest of engaging a broad range of participants in the standards-creation process, any readers with a strong reaction, opinion, critique, or idea about these example classifications should join the follow-up committee that will create a standard on robot classification after this report.
Chapter Bibliography
• Prestes, Edson, et al. "Towards a core ontology for robotics and
automation." Robotics and Autonomous Systems 61.11 (2013): 1193-1204.
• Kunze, Lars, Tobias Roehm, and Michael Beetz. "Towards semantic robot
description languages." 2011 IEEE International Conference on Robotics
and Automation. IEEE, 2011.
• Kim, Stephanie, Jacy Reese Anthis, and Sarah Sebo. "A taxonomy of robot
autonomy for human-robot interaction." Proceedings of the 2024 ACM/
IEEE International Conference on Human-Robot Interaction. 2024.
The Role of Stability
For stability, in this chapter, we are considering classic humanoid robots that are bipedal, powered, and actively balancing. Such robots differ from other types of
robots for which we have standards by the fact that they can only stand
through powered balance and can change the shape and position of their
support polygon relative to the rest of their structure during normal operation
(i.e., they can take steps). These mobility features provide the agility, range of
possible motions, and responsiveness that give humanoids the highly varied
potential use cases they have. However, these same capabilities are also the
main source of hazards caused by humanoid-type robots. At all times during
operation, a loss of power or too large an unexpected disturbance could result in
the robot toppling over. This could harm the robot itself, any parts of its
environment, and any nearby people. As a result of this risk, stability-related
safety concerns are a main barrier to the adoption of humanoids in any space
shared with humans.
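The support-polygon idea above can be made concrete with a short sketch: a statically stable system keeps the ground projection of its center of mass (CoM) inside its support polygon, a condition an actively balancing biped routinely leaves between steps. The geometry routine and coordinates below are illustrative assumptions, not from this report.

```python
# A minimal sketch of a static-stability check: the CoM ground projection
# must lie inside the support polygon. Actively balancing bipeds violate
# this during normal gait, which is the source of the hazards discussed.
# Coordinates are hypothetical.

def point_in_convex_polygon(p, poly):
    """True if 2-D point p lies inside a convex polygon (CCW vertex list)."""
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        # Cross product; negative means p is to the right of edge a->b,
        # i.e., outside a counter-clockwise polygon.
        if (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax) < 0:
            return False
    return True

# Support polygon of a single foot, in metres (CCW), and two CoM projections.
foot = [(0.0, 0.0), (0.25, 0.0), (0.25, 0.10), (0.0, 0.10)]
print(point_in_convex_polygon((0.12, 0.05), foot))  # True: statically stable
print(point_in_convex_polygon((0.40, 0.05), foot))  # False: must step or fall
```

A fixed-base or statically stable robot satisfies this check in every powered-off pose; a biped mid-stride does not, which is why "brakes on, power off" is not automatically a safe state for humanoids.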
The proper way to address these safety concerns would be through a risk
assessment that identifies all potential hazards and the appropriate safety
measures that must be taken in response to those hazards. Ideally, technology-
and application-specific standards would be available to guide readers
(standards developers, policy makers, robot researchers, etc.) through the risk
assessment process. Currently, however, there is a lack of standards that provide
the tools needed to quantify the level of risk or validate the effectiveness of
safety functions on actively-balancing robots. The entire burden of proving that
the robots are safe enough is thus on the manufacturer of each robot. Creating
standards for evaluating stability will ease this burden on manufacturers. Such
standards will be used to measure and prove stability during key applications,
allowing customers to have confidence in the safety of the robot, as well as
allowing manufacturers to measure performance on tasks and prove utility for
the applications that customers want to use them for. Creating practically
useful standards is the necessary first step towards creating certifications for
humanoid robots.
An example first step is to address the fact that standards currently do not have
a cohesive definition of stability. The word itself means slightly different things
in different contexts: the technical engineering definition (a system state settling to a defined equilibrium), the colloquial sense (a system able to act and interact with the environment without harming anything), and the specific humanoid sense (being able to walk and balance without falling over, or at least managing the risk of toppling in the same way humans do). In addition to
the varying types of stability, there are two aspects of stability to be considered:
safety and performance. Safety considers the avoidance of harm to people in
the vicinity. This can be considered a minimum requirement and more of a
pass/fail behavior. Performance considers evaluating capabilities to both
perform intended tasks and respond to changing conditions, which can also
include measuring potential risks posed to the robot itself and the environment
as a desirable byproduct.
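For reference, the technical engineering sense can be stated formally. The standard control-theory (Lyapunov) formulation below is textbook material offered as context, not a definition drawn from any existing robotics standard.

```latex
% Lyapunov stability of an equilibrium state x_e of a system state x(t):
% small initial deviations stay small for all time.
\forall \varepsilon > 0,\ \exists \delta > 0:\quad
  \lVert x(0) - x_e \rVert < \delta
  \;\Longrightarrow\;
  \lVert x(t) - x_e \rVert < \varepsilon \quad \forall t \ge 0.

% "Settling to" the equilibrium is the stronger notion of asymptotic
% stability: the system is stable and, in addition,
\lVert x(0) - x_e \rVert < \delta
  \;\Longrightarrow\;
  \lim_{t \to \infty} x(t) = x_e.
```

A walking humanoid is deliberately operated away from any such fixed equilibrium, which is one reason the engineering and humanoid senses of "stability" diverge.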
Despite not being inherently stable, humanoid robots are expected to be able
to perform a wide variety of tasks while also being robust to a wide variety of
environments and external disturbances. On the other hand, the complex
structure and walking capability of these robots give them many options in how
to respond to events and disturbances (e.g., navigating variable terrains or
stepping to avoid falling rather than just leaning to balance). Current robots
have displayed increasingly impressive capabilities as they approach
applications that live up to the high expectations of humanoids. However, the
variation in how a robot might respond to certain situations, as well as the
variety of control approaches that drive such behaviors, makes evaluating the
stability of humanoid robots more challenging. Additionally, because the
functioning of the robot is so dependent on the performance of the control
algorithm, if a specific combination of task, robot, environment, and inputs has
not been seen and tested before, any standardized validation space would be
very broad, at the limits of practicability.
of stability-related standards that will be detailed. Within that roadmap, Section
3 discusses the need for quantifiable performance metrics and methods to aid
in understanding humanoid robot motions, and Section 4 discusses how the
stability standards are closely related to the development of safety standards as
well. Section 5 provides concrete recommendations for Standards Development
Organizations (SDOs), and Section 6 concludes the chapter.
For robotics, considerations of safety begin with safety standards for machinery.
Safety for machinery is implemented in two steps: risk assessment and risk
mitigation. Risk assessment for machines is guided by ISO 12100, with other
standards covering individual classes of risks, such as those derived from
electrical hazards (IEC 60204-1). The fundamental hazards for machinery in ISO 12100 are extended in product-specific safety standards, such as those for industrial robots (ISO 10218-1 and -2, recently revised), personal care robots (ISO 13482), and others. Risk mitigation is often done in one of two ways: either avoiding a
particular risk entirely through safety requirements or actively avoiding it during
operation through safety functions. Some safety requirements inside the
product-specific safety standards point to technology-specific or narrow
standards, for instance, the ISO 13850 on Emergency Stop or the ISO 13855 on
safe distances from static and moving hazards. Product-specific standards vertically address risk assessment for robots; the selection of electrical and mechanical equipment; the properties of controls and power sources; the mandatory safety-related stops; mechanical and non-mechanical, physical or virtual safety guarding to protect people; motion limits, such as avoiding kinematic singularities or speed and separation monitoring; labelling; instructions; and more. When these protective measures are implemented with
devices dedicated to the reduction of risks via the monitoring and active control
of reaching or maintaining safe states, then Functional Safety is implemented
following IEC 61508 (or its derivation for machinery ISO 13849). Altogether, these
standards establish general requirements for the safety of machinery, and
robots specifically.
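As a small illustration of how such narrow standards translate into engineering numbers, the general positioning formula of ISO 13855, S = K·T + C, can be sketched as follows. The parameter defaults here are illustrative, not normative for any particular application; the standard itself defines which K and C values apply to a given detection configuration.

```python
# Minimum separation distance per the general ISO 13855 formula S = K*T + C.
# K: approach speed of a walking person (ISO 13855 uses 1600 mm/s).
# T: overall response time (detection + stopping), in seconds.
# C: intrusion distance in mm (the 850 mm default here is illustrative only).
def minimum_separation_mm(response_time_s: float,
                          stopping_time_s: float,
                          intrusion_distance_mm: float = 850.0,
                          approach_speed_mm_s: float = 1600.0) -> float:
    total_time_s = response_time_s + stopping_time_s  # T = t_detect + t_stop
    return approach_speed_mm_s * total_time_s + intrusion_distance_mm

# Example: 0.1 s to detect a person, 0.4 s to reach a safe stop
s = minimum_separation_mm(0.1, 0.4)  # 1600 * 0.5 + 850 = 1650 mm
```

The point of the sketch is that the separation distance grows linearly with total system response time, which is why later sections emphasize how long a dynamically balancing robot needs to reach a genuinely safe state.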
Beyond these general safety standards, ISO and other SDOs have produced several safety standards that address further subcategories of robots. However, these standards make an unwritten assumption that the base of the robot being considered is either fixed or statically stable. As a result, none of these specialized safety standards apply to humanoids. For a humanoid, even with all joints braked and powered off, a very conventional safe state for fixed manipulators, the robot is not necessarily in a safe state. In general, it must be assumed that a locked pose is unstable. This potential instability, present even in normal conditions, means that the robot can become unbalanced and fall over even with high-performing controllers. Current standards do not adequately address stability requirements, and so a dedicated series of standards is now recognized as a market need.
The only standard that does not exclude robots with actively controlled stability,
including legged robots, from its scope is ANSI/RIA R15.08-1. This US national
standard provides safety standards for industrial mobile robots (IMRs), including
their navigation and control. R15.08-1 is uniquely silent concerning the mobility
principle of robots, and legged robots with manipulators could fit in the scope.
The standard defines “Type C IMRs” when manipulators are integrated with a
generic mobile base, which is the mobility representation closest to humanoids.
While R15.08-1 states that “The IMR shall maintain stable operation during
travel” and includes some methods to validate this stability condition, it only
considers that instability could be caused by a payload that is too heavy and/or
held too far away from the base of the robot, causing the IMR to tip over. These
requirements are derived from the risks analyzed for single manipulators
mounted on top of moving bases.
Humanoids are far more complex with respect to dynamic stability and the use of limbs for and during balancing. A dedicated family of standards addressing stability for actively balancing and legged robots would greatly contribute to the clarity of specifications and requirements for manipulation and balance.
In the domain of Service Robotics, IEC 63310 on Active Assisted Living Robots
states, “AAL robots with assisted mobility functions other than wearable or
wheeled ones should have the ability to finish their intended tasks.” ISO 13482
on Personal Care Robots simply states that robots should have sufficient
stability: “The personal care robot shall be designed to minimize mechanical
instability (e.g. overturning, falling, or excessive leaning when in motion) due to
failure or reasonably foreseeable misuse.” It references ISO 7176-1 and ISO 7176-2, which cover static and dynamic stability for wheelchairs. Again, these standards only require that the device not be destabilized during normal use. They do not address robots with dynamic self-balancing or the ability to alter their base of support during operation.
Even further from industrial and service robotics safety standards, a potential
use of humanoid robots appears in the Technical Report ISO 4448-1 about
Public-area Mobile Robots (PMRs). This report is form-agnostic and explicitly
includes legged robots, as well as the more common wheeled mobile robots,
and includes safety considerations for the application of automated pick-up and
drop-off (PUDO) of goods and people at the interface between roadways and
sidewalks (the “kerb”), in addition to the behavioral requirements for PMRs
operating in pedestrian spaces, including sidewalks and roadways. The core
goal of this report is to provide a framework for municipal, provincial/state, and
federal governments to establish regulations surrounding the operation and
use of automated systems performing PUDO tasks. Interestingly, the report
introduces the use of a “shy distance” which dictates how far away a PMR
should attempt to stay at a minimum from any “inattentive, uninvolved,
unprotected, and untrained bystanders”. For comparison, no existing mobile robot standard dictates a predetermined separation distance; instead, they refer to a more sophisticated computation of the space sufficient to come to a safe stop or to prevent hazardous collisions. Additionally, exposure of the general population and operation in outdoor conditions pose very challenging problems.
Overall, most standards require that robots, whether mobile manipulators or
personal care robots, must not cause undue hazards due to instability during
normal operations, nor should maintaining stability increase the risks of already
existing hazards. First and foremost, “preventing hazards” due to loss of stability
is not an option for actively controlled robots. By the very nature of actively
controlled stability, hazards cannot be eliminated; only risk reduction and the evaluation of residual risks of instability are viable considerations. Second, and
challenging from both technical and regulatory standpoints, balance-related
actions are part of the safety system in the scope of functional safety. A safety
function is the capability to detect a particular hazardous effect (i.e., a quantity)
through sensing of internal states (e.g., a fault) or external phenomena (e.g., an
obstacle), then enact corrective actions that successfully either avoid any
hazards or reduce the consequences of the hazardous event to acceptable
levels. Conventionally, the primary approaches to establishing confidence that
safety-related control actions are sufficient to reduce risks are by measuring
their ability to control or avoid failures according to standardized levels of
residual failures. This metric is established in IEC 61508 with Safety Integrity
Levels (SIL), and inherited with variants by the derived sector-specific functional
safety standards like ISO 13849 for machinery with Performance Levels (PL).
Both SIL and PL measures are based on intervals of Probability of Dangerous
Failures per Hour that are suitable for random failures and are paired to a
corresponding level of control or avoidance of systematic failures (Systematic
Capability). These levels are used to create requirements for designing
increasingly robust safety functions intended to reduce risks that are
proportional to the frequency and intensity of the associated hazards. The
higher the risk, the more demanding the protective function must be.
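The PL bands can be illustrated with a small classifier sketch. The band edges below follow the PFHd ranges published in ISO 13849-1, but the standard itself remains the normative source, and a PL also carries structural (category), diagnostic-coverage, and MTTFd requirements that are not modeled here.

```python
# Illustrative mapping of average probability of dangerous failure per hour
# (PFHd) to ISO 13849 Performance Levels. Consult ISO 13849-1 for the
# normative values and the additional architectural requirements per PL.
PL_BANDS = [  # (PL, lower bound inclusive, upper bound exclusive), per hour
    ("e", 1e-8, 1e-7),
    ("d", 1e-7, 1e-6),
    ("c", 1e-6, 3e-6),
    ("b", 3e-6, 1e-5),
    ("a", 1e-5, 1e-4),
]

def performance_level(pfhd: float) -> str:
    for pl, lo, hi in PL_BANDS:
        if lo <= pfhd < hi:
            return pl
    raise ValueError("PFHd outside the ranges covered by ISO 13849-1")

# A safety function with PFHd = 5e-7 per hour falls in PL d, the level that
# ISO 10218 typically requires for top-risk robot safety functions.
pl = performance_level(5e-7)
```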
However, these functional safety standards are increasingly inadequate when hardware and software failures are intertwined, as is the case for actively controlled stability and for the perception of the surrounding environment that is instrumental to successful locomotion and navigation.
For example, ISO 10218 mandates that typical top-risk safety functions for industrial robots achieve PL d, specifying the required range of residual dangerous-failure probabilities depending on the hardware architecture (structure category 2 or 3) when expressed per ISO 13849, or reach SIL 2 with a hardware fault tolerance of 1 when expressed in accordance with IEC 61508 or IEC 62061. This approach is deeply understood and consolidated
for relatively simple functions. However, the dynamically balanced nature of
humanoid motion means that any safety function related to stability will have varying results, with a chance of success that depends on both the state of the robot and any external factors present. Recovering, or just
maintaining balance, requires a complicated series of sensing, internal
modelling, motion planning, and control capabilities that are harder to
characterize as SIL or PL safety performance levels. Additionally, there are
currently no test methods or procedures to measure performance and validate
the capability of balance safety functions. To have full confidence in the
performance of stability safety functions, more than just the standard SIL and
PL process is needed.
Hazards related to instability are, in fact, not well expressed in terms of rates of
failures per hour, but rather as events that depend on the execution of actual
behaviors and the conditions of the environment. Hardware and software
components implementing safety functions can be characterized by
distributions of failures with time-based rates, but aggregated behaviors are
better expressed in terms of the rate of success, and risks may still arise even
without system failures. This scenario is common in the domain of autonomous vehicles, where the analysis of functional insufficiencies (see ISO 21448 on the safety of the intended functionality, in combination with functional safety per ISO 26262) is a preliminary step that informs how safety functions address the ultimate execution of a safety action. Similarly, assessing the safety of complex systems like humanoid
robots requires a comparable evolution in approach, focusing not just on
preventing/detecting malfunctions, but on ensuring an acceptable level of
safety of their intended actions within unpredictable environments.
In conclusion, current standards were created with their underlying application domains and risks in mind (collaborative applications, service robots, AGVs, AMRs, autonomous vehicles, etc.). Those
standards cannot be applied directly to humanoids, where the problem of
stability affects all safety-critical functions and the degree of successful
resolution to safe states. Dedicated risk assessment considerations and analysis
of functional insufficiencies are not common in the domain of robotics but are
concluded to be necessary for complex dynamic behaviors and highly variable
autonomous functions. On top of this, even where the ability of humanoids to attain balance and/or recovery is implemented as safety-related parts of control systems, there is a lack of standardized test methods to validate such safety capabilities.
For example, ISO 9283 focuses on establishing the accuracy and repeatability of
industrial robots applied to manipulation tasks. These tests are mostly applied
to fixed-base robot arms, with accuracies on the order of millimeters.
Humanoids can also perform manipulation tasks; however, this particular standard is limited in that it only considers open-loop performance, without covering the integration of the large number of sensing systems that humanoids carry. It also focuses only on end-effector positioning and does not address robots without a fixed base. This makes the test methods not fully applicable as-is, because unmeasured error in the positioning of the base will interfere with measured manipulator performance. Similarly, ISO 18646 is a series of performance measurements and test methods for service robots. Its tests do not consider stability beyond tip-over caused by static loads offset from the center of mass or by inertia from dynamic motions, with relevant work underway in the ISO/DIS 18646-5 project (in the final stages of development at the time of writing). Overall, for all robotics test methods that consider only fixed-base robots, even where a humanoid can accomplish the method itself, balance and stability performance will affect the overall results and must be considered.
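To make the fixed-base assumption concrete, here is a minimal sketch of ISO 9283-style positioning accuracy (AP) and repeatability (RP) computations. The formulas follow the standard's barycenter-based definitions (AP is the distance from the barycenter of attained positions to the commanded position; RP is the mean distance to the barycenter plus three standard deviations), but the function name and data layout are our own. For a humanoid, any unmeasured motion of the floating base would be folded into these numbers.

```python
# Sketch of ISO 9283-style positioning accuracy (AP) and repeatability (RP)
# for a set of attained end-effector positions (x, y, z) in meters.
import math

def accuracy_and_repeatability(attained, commanded):
    n = len(attained)
    # Barycenter of the attained positions
    bx = sum(p[0] for p in attained) / n
    by = sum(p[1] for p in attained) / n
    bz = sum(p[2] for p in attained) / n
    # AP: distance from barycenter to the commanded position
    ap = math.dist((bx, by, bz), commanded)
    # RP: mean distance to barycenter plus three standard deviations
    dists = [math.dist(p, (bx, by, bz)) for p in attained]
    mean_l = sum(dists) / n
    std_l = math.sqrt(sum((d - mean_l) ** 2 for d in dists) / (n - 1))
    return ap, mean_l + 3.0 * std_l
```

Applied to a floating-base robot, the base drift inflates both AP and RP, which is exactly why the report argues these fixed-base methods are not fully applicable as-is.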
As inspiration for new standard test methods, there have also been several past academic projects dedicated to evaluating humanoid robots. Many papers discuss humanoid robot benchmarking, though most focus on narrow aspects of humanoid robots such as whole-body control approaches, learned control policies, specific tasks, or individual robots (as a non-exhaustive list). Of note is the EUROBench Project, a five-year, multi-institution project that set out to create a unified benchmarking framework for robotic systems in Europe. Various sub-projects generated test methods for many aspects of robotics applicable to humanoids, from whole-body manipulation to balance. Of all the research mentioned here, however, none has yet made it into any standards as metrics, test methods, or practices. These projects are starting points for future committees to begin forming standards for evaluating humanoid performance.
Path Forward
To address the challenges that stability presents for the adoption of humanoid technology, we recommend the following approach. Across all prior research and standards, we have found that although balance and stability control is implemented on every humanoid robot, balance/stability measurement has not yet been published in any standard, though several standards-development programs are ongoing. Given the existing body of knowledge, however, stability criteria should become part of accessible standards, with high potential benefit to the humanoid industry. Despite this clear goal, implementing a codified set of criteria in technical standards is still not simple.
Therefore, we recommend a two-part process for creating new standards
focused on humanoid stability performance. The first part is measuring and quantifying stability: creating test methods to evaluate stability performance in a variety of tasks and conditions. These tests will
become the foundation for overall humanoid performance testing in the future.
This effort will also focus on creating usable stability metrics. The second part is
on developing safety standards for humanoids. This will involve building upon
the current approaches to robot safety to extend to the particular hazards
presented by humanoids. The design of safety standards will come from two
directions. One will build on the test method and metric creation, where each
test method can establish safety thresholds for each task, and the other will be a
new approach to standards, looking at integrating safety requirements into the
controller itself at the design stage.
TABLE 1: A SUMMARY OF CONTRIBUTIONS NEEDED IN NEW STANDARDS, AS WELL AS RECOMMENDATIONS FOR ASSOCIATED NEXT STEPS TO DEVELOP THOSE STANDARDS
(Table 1 columns: Needed Standards | Recommendations)
safe implementation of humanoid robots in the highest-complexity, but also highest-value, areas (e.g., humanoids walking in complex, crowded spaces alongside humans, able to collaborate, etc.).
Table 2 below lists a sample set of test methods that could be developed for a
humanoid robot that would be implemented in simple load pick-up, move, and
place tasks in a standard industrial environment.
TABLE 2: EXAMPLE TEST METHODS TO EVALUATE STABILITY IN HUMANOID ROBOTS.
FALL RECOVERY TEST. Description: Robot is intentionally destabilized or falls, and its ability to recover to a safe state (e.g., upright, curled) is measured. Metrics: Recovery time, success rate of recovery, impact forces during fall, final safe pose.
When creating these validation methods, the procedures should be flexible
enough to allow robots of different motion capabilities and morphologies. The
tests should be practical and rigorous, easy enough to set up and conduct so
that they can be widely used, but still difficult enough to accurately differentiate
robots of various capabilities.
From each test, numerous evaluation data points can be taken, such as success
rate, speed of execution, accuracy, etc. Each of the individual example tests that
could be created can be considered a basic capability. Since a humanoid is a general-purpose robot, more complex behaviors can also be tested by combining several of the basic capability tests outlined in Table 2. Such
combination tests should be performed if the application requires specific use
cases that can be evaluated.
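As a sketch of how trial data from such basic capability tests might be reduced to the evaluation data points mentioned above, consider the following. The record fields and the choice to compute timing statistics only over successful trials are our assumptions for illustration, not prescriptions from any standard.

```python
# Aggregate repeated trials of a basic capability test into summary metrics:
# success rate over all trials, timing statistics over successful trials.
from statistics import mean, stdev

def summarize_trials(trials):
    """trials: list of dicts with 'success' (bool) and 'duration_s' (float)."""
    n = len(trials)
    successes = [t for t in trials if t["success"]]
    durations = [t["duration_s"] for t in successes]
    return {
        "trials": n,
        "success_rate": len(successes) / n,
        "mean_duration_s": mean(durations) if durations else None,
        "duration_stdev_s": stdev(durations) if len(durations) > 1 else None,
    }

trials = [{"success": True, "duration_s": 12.0},
          {"success": True, "duration_s": 14.0},
          {"success": False, "duration_s": 30.0}]
summary = summarize_trials(trials)  # success_rate = 2/3, mean = 13.0 s
```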
One limitation of this test-method-based approach is that the total number of possible permutations of tests is very high, far more than the number of test methods that can realistically be written. This is a limitation of all demonstrated, behavior-based measurements of performance. Especially for humanoid robots, which perform multiple tasks at once, it has not been explicitly proven that a robot that performs the pre-defined test methods well will also perform well on tasks in a real work environment. The correlation between performance on test methods and performance in real work is still meaningful, but more research should be done to understand the uncertainty in this relationship.
Control-Based Metrics
While test method performance is the first and most straightforward way of
evaluating humanoid capability, completion of those tasks requires a
combination of control, modelling, planning, and behavior generation
functions. Walking stability is an inherent part of that system, but it can be hard
to isolate the specific stability performance when only evaluating by task
performance. As such, targeted metrics for stability performance need to be
identified or produced that can be implemented during tests. To be
standardized, such metrics should be repeatable, robust, and sufficiently accurate, without requiring overly expensive equipment to measure.
for more dynamic actions such as walking. Many state-of-the-art humanoid
robot control approaches already incorporate some amount of prediction, such
as with optimal control or model predictive control (MPC) based approaches. To
fully evaluate stability performance, metrics will need to incorporate predictions
of possible future dynamic states and what is acceptable as “stable enough”.
Beyond knowing when a robot is unstable, there is a need to know how stable a
robot is, such that other motions and interactions can be planned accordingly.
For industrial and other applications in human spaces, reliability and safety are
primary concerns for which measures of stability and an understanding of how
stability determines behaviors are needed. Higher stability and more robust motions may be required or desired in some situations, while lower-stability but higher-performance actions may be acceptable in others. While a standard
measure of stability is not needed to design the structure or capability of a
robot, it would be extremely beneficial for risk assessment and proper
implementation planning. Additionally, just like humans walking, humanoid
robots in real-world spaces would be subject to disturbances, interruptions, and
other unpredictable events. As such, the goal of these metrics is to have a way
of continually evaluating the performance of the robot, not only during
validation tests, but also during real-world applications as a monitor of
performance.
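One candidate for such a continually evaluable, prediction-aware metric is the capture point, or extrapolated center of mass, from the linear inverted pendulum literature: the CoM position plus velocity scaled by the pendulum time constant. If it lies inside the base of support, the robot can in principle come to rest without stepping. The 1-D sketch below illustrates the kind of metric under discussion; it is not a standardized definition, and the numbers in the example are arbitrary.

```python
# Signed 1-D margin of the extrapolated center of mass (XCoM) to the base of
# support, using the linear inverted pendulum (LIP) time constant.
import math

G = 9.81  # gravitational acceleration, m/s^2

def xcom_margin(com_pos, com_vel, com_height, support_min, support_max):
    """Margin (m) of the XCoM to the support interval; positive = inside."""
    omega0 = math.sqrt(G / com_height)  # LIP natural frequency, 1/s
    xcom = com_pos + com_vel / omega0   # extrapolated center of mass
    return min(xcom - support_min, support_max - xcom)

# CoM centered over a 0.25 m support at height 0.9 m, moving at 0.3 m/s:
m = xcom_margin(com_pos=0.0, com_vel=0.3, com_height=0.9,
                support_min=-0.125, support_max=0.125)  # still positive
```

Because the metric already folds in velocity, it gives exactly the kind of graded "how stable" signal described above, usable both in validation tests and as a runtime monitor.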
Finally, an additional need for stability metrics in the creation of future
standards is for realizing safety requirements in non-standard tasks. The
simplest way to prove that a robot can safely perform a task is to establish a
standard test method evaluating the performance of that task, and then
conduct trials validating the required capability. However, it is impossible to test
every action that humanoids may be desired to accomplish, so there must be
measures of stability that can be applied to any generalized task. Showing
performance with these metrics will then allow for meeting the guarantees of
stability and performance that controllers must deliver, as required by safety
standards for various classes of robots and applications.
careful consideration of the scope and degree of interpretation of standardized
requirements. In general, safety technical standards must not be used à la carte.
Potential options to consider are the (obvious) complete physical separation
between robot space and human spaces, and the residual exchange of energy
in case of physical contact. In both cases, the impact of the active control of
stability has a direct effect on the size of separation to either maintain from
humans or to evaluate when defining a safe state based on residual contacts.
For SSM-like approaches, mobility and active control of stability to remain
upright will require the introduction of additional space to create a collaborative
non-contact zone. If a human approaches this safe region around the robot,
then the robot could, for instance, enter a stance and control mode that is
practically motionless. For a robot with two legs, that would mean keeping both feet on the ground and taking no steps, or at least no step that could generate a large or unpredictable displacement. Common sense would indicate
that such a robot pose should be such that if power were to be lost (or an e-stop
hit), the robot would fall away from the human, and not towards it. In this basic,
yet robust, implementation of the SSM criteria, the exact definition of the size of
the stable safe region is determined as a design solution (not from the
standard’s requirement). For instance, it could be a region with a radius
matching the height of the robot, and then shaped based on the current
centroidal velocity such that the region represents the possible space a robot
could fall into if it were to topple in any direction at any moment. Note that we are not recommending this as the exact content of a standard, nor as the standardized requirement itself. It is instead an example solution, illustrating what type of framework and requirements (i.e., the targets for such solutions) are needed in safety standards dedicated to humanoids.
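The illustrative region described above could be sketched as follows. The velocity shaping (a simple linear stretch along the centroidal velocity) and the gain are our assumptions for illustration, consistent with the example in the text but in no way standardized requirements.

```python
# Direction-dependent radius of an illustrative "stable safe region":
# the robot's height (the distance it could cover by toppling) plus an
# extension along the current centroidal velocity.
import math

def safe_region_radius(robot_height_m, centroidal_vel_mps, heading_deg,
                       velocity_gain_s=1.0):
    """Keep-out radius (m) in a given heading (degrees from +x).
    centroidal_vel_mps: (vx, vy) centroidal velocity in m/s."""
    h = math.radians(heading_deg)
    # Component of the centroidal velocity along this heading; only motion
    # toward the heading extends the region.
    v_along = max(0.0, centroidal_vel_mps[0] * math.cos(h)
                       + centroidal_vel_mps[1] * math.sin(h))
    return robot_height_m + velocity_gain_s * v_along

# A 1.6 m robot moving at 0.5 m/s along +x: the region extends farther
# ahead of the robot than directly behind it.
ahead = safe_region_radius(1.6, (0.5, 0.0), 0.0)     # ~2.1 m
behind = safe_region_radius(1.6, (0.5, 0.0), 180.0)  # ~1.6 m
```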
For PFL-like approaches, it is currently unknown what the potential configurations or limits are for residual contacts within the pain-onset limits specified for collaborative applications. ISO/TS 15066 (resp. ISO 10218-x:2025) is, in fact, completely dedicated to single manipulators, while the complex effects of energy (or power) flux density, of exposed contact surfaces, and of force distribution are unknown for humanoids. Standard methodologies for recommending limit settings and their verification are heavily restricted to assumptions specific to manipulators. Still, the principle of quantifying the effects of physical interaction illustrated in the PFL mode of ISO/TS 15066 is becoming a solid foundation for extending such standardized limits to general machinery when contacts are part of the intended application. It is important to remember that the PFL mode involves purposeful collaboration between humans and robots in the same shared space. Accidental contacts have a distinct risk profile that can be studied through the specific workflow of such applications. An extension to occasional contacts for non-task-related operations (e.g., random contacts with bystanders) is, in general, out of scope. It is indeed tempting to consider collaborative limits generic because a mobile robot can be anywhere in a working space. However, such a shortcut would bypass the due risk assessment, with the obvious consequence of missing a proper estimation and evaluation of risks.
To create new standards for safety, especially as it is related to stability, the most
straightforward approach would be to establish performance thresholds on
tasks that the robot needs to be able to perform, and then require robots to
demonstrate that level of capability on those tasks through validation test
methods. The test methods and metrics discussed in Section 3 could be used
for this purpose, with the exact threshold that must be met for any particular
metric-task pair determined by the user and application. The set of required
tasks and the level of the safety thresholds can be altered for different
applications and environments. For example, a humanoid designed to go out
into the public may require a higher level of stability performance and
robustness to disturbances, but a lower level of manipulation accuracy,
compared to a robot that is to be used in an industrial environment with only
infrequent interaction with fully trained professionals.
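A minimal sketch of how such application-dependent thresholds on metric-task pairs might be organized and checked follows. All task names, metric names, and numeric values here are hypothetical placeholders; as the text notes, the actual thresholds would be determined by the user and the application.

```python
# Hypothetical per-application safety thresholds on (task, metric) pairs.
THRESHOLDS = {
    "public_space": {("walk_disturbed", "success_rate"): 0.99,
                     ("pick_place", "position_error_mm"): 20.0},
    "industrial":   {("walk_disturbed", "success_rate"): 0.95,
                     ("pick_place", "position_error_mm"): 5.0},
}

# Metrics where smaller is better (errors), as opposed to rates.
LOWER_IS_BETTER = {"position_error_mm"}

def meets_thresholds(application, results):
    """results: {(task, metric): measured value}. True if every required
    metric-task pair for the application was measured and passes."""
    for key, limit in THRESHOLDS[application].items():
        value = results.get(key)
        if value is None:
            return False  # a required test was not performed
        _, metric = key
        ok = value <= limit if metric in LOWER_IS_BETTER else value >= limit
        if not ok:
            return False
    return True

results = {("walk_disturbed", "success_rate"): 0.97,
           ("pick_place", "position_error_mm"): 4.0}
# This robot meets the industrial profile but not the stricter
# public-space stability requirement.
```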
any nearby humans in the case of a fall, it can be shown, via a disturbance-based test method representing the tasks in question, that the robot will fall in a predetermined direction if a fall occurs. If, during operation, a robot enters a state or environment for which its stability has not been as rigorously proven, then a different, fallback safe state could be utilized. While an
approach like this is useful and would be an accessible first step to establishing stability and safety requirements, its lack of generalizability beyond the particular tested tasks is a limitation. More sophisticated safety
requirements will need to be defined that can handle any arbitrary task a robot
might need to perform.
As a result, we can state that there are two parts to meeting the safety
requirements to be set out in future standards: first is demonstrating the robot’s
capability to safely perform a task (as measured by specific metrics and test
methods), second is implementing guarantees that the robot will perform the
task in a real-world implementation, subject to any foreseeable disturbances.
Such guarantees could look like stability constraints in the controller, high-
accuracy path tracking, or predictive control that models the robot throughout
an entire dynamic motion. Several such approaches have been presented in
academic research, but to be implemented in practice, new standards are
needed with the details and procedures necessary to validate such controllers.
Additionally, if meeting these safety requirements involves state estimation of
an internal model or external sensing of the environment, then validation of
sensor and modelling reliability will also need to be presented.
These details and procedures are needed to guarantee the stability, and thus
safety, of a bipedal robot within foreseeable and reasonable circumstances.
However, it is recognized that 100% guaranteed stability is not always possible.
Just as humans sometimes fall over due to unforeseen circumstances or too large a disturbance, a bipedal robot may fall over as well. This means that
standards for fault handling, fall response, and handling of all other errors must
also be presented.
Posed in more procedural terms, the tried and true approach to managing risks
inherent to machines in human spaces is to institute appropriate controls that
reduce the risk. However, the behaviors of balance and stability that bipeds
exhibit are produced by increasingly complex algorithms (optimal control,
multi-layer MPC, learned policies) that make it harder to understand what risk
the robot poses from possible instability, and therefore harder to understand
what the appropriate controls are. As the controllers that govern robot behavior
increase in mathematical and algorithmic complexity, the standards that are
used to measure and evaluate these controllers will need to be developed to
match that capability. This does not replace the need for control-independent, task-performance test methods, but should complement them.
go beyond behavior generation constraints and safety control functions, to
include physical design specifications such as soft contact points or low centers
of gravity and modular safety devices such as air bags or overhead gantries in
hazardous areas.
intended functionality extends into all aspects of robotics, as well as the
standards that shape them. Therefore, this group will need to successfully work
with other groups and SDOs, addressing all other aspects of robotics as well.
At the time of the publication of this report, a new ISO project to develop a safety standard covering industrial bipedal robots has begun: ISO/AWI 25785-1, Safety requirements for industrial mobile robots with actively controlled stability (legged, wheeled, or other forms of locomotion) -- Part 1: Robots. This is the first, and a significant, step towards creating updated safety standards for bipedal robots.
Conclusion
safety functions and safety behaviors that would include several contributions
to risk reduction. Safety standards will benefit from testing standards, as most
safety behaviors will require quantifiable conditions to establish minimum
acceptance thresholds of performance. Additionally, as new subsystem
performance requirements are defined (balance, state estimation, navigation,
multi-objective coordination), new standards will need to be created that can
verify the capability and reliability of these subsystems.
Humanoid robots, like all machines, will never be perfectly risk-free. However,
with the proper safety controls implemented, there can still be a safe and
effective implementation of the technology across many intended application
domains. Creating the standards, as described above, will provide the tools and
common understanding necessary to achieve these successes. Finally, the same
standards created with humanoid robots in mind will apply to a wide variety of
robots that share the targeted features with humanoids. Standard efforts
resulting from this report will improve the implementation and applicability of
many robotics technologies throughout industry.
Sources:
https://www.marketsandmarkets.com/Market-Reports/humanoid-robot-market-99567653.html
https://www.marketsandmarkets.com/PressReleases/usa-humanoid-robot.asp
https://www.snsinsider.com/reports/humanoid-robot-market-1616
https://www.cervicornconsulting.com/humanoid-robot-market
https://interactanalysis.com/insight/humanoid-robots-large-opportunity-but-limited-uptake-in-the-short-to-mid-term/
Human-Robot Interaction for
Humanoids
Authors: Francisco Andrade Chavez, Benjamin Beiter, Marie Charbonneau,
Brandon J. DeHart, Greta Hilburn, Jeremy Marvel, Kartik Sachdev, and Dieter
Volpert
How does a humanoid robot designer know what details to include or change,
and how much? There is much research on these topics, and more needs to be
done as humanoids (as highly interactive robots working in human spaces) are
used more broadly. To start to understand the various aspects of this problem,
however, we can lean on the expertise of those in the field and the expectations
of prospective users, as described in the next section.
To facilitate a deeper look into this topic, a qualitative survey was developed and
shared with stakeholders across many different fields, both within HRI itself and
beyond. The intent of distributing this survey widely was to collect responses
from experts on the topic, experts on related topics, and laypeople to generate
an indicative cross-section of opinions on which to base the discussions and
recommendations found later in this chapter.
At the start of the survey, respondents were informed that its purpose was to
help us determine what aspects of HRI might impact the development and
application of standards for humanoid robots. The survey consisted of the
following 12 questions, divided into three thematic sections:
Communication Behaviors
• How should humanoid robots respond to accidental/intentional physical
contact with people to ensure both safety and utility?
Over a few months, we collected fifty responses to this survey, with responses to
each question from each respondent ranging from a few words to several
paragraphs. Based on the inductive qualitative content analysis method introduced by Elo and Kyngäs in 2008, we have summarized and
distilled the responses to inform the discussions and recommendations below.
This chapter has also been informed by a set of 12 user interviews on the potential integration of robots into daily life, collected as part of a study conducted in the USA by one of our team members. The in-depth analysis of these interviews, which included a diverse range of participants (all non-engineers), reveals a complexity of expectations and ethical considerations that are critical for future development.
The findings consistently underscore the importance of emotional intelligence
and trust in human-robot interactions, with users emphasizing robots' need to
comprehend and respond to human emotions, especially when engaging with
children, older adults, and other vulnerable populations. The user interview
responses regarding accessibility, data privacy, and potential social implications
highlighted an urgent demand for ethical design and equitable deployment.
This user interview feedback informs standards by highlighting the need for humanoid robots not only to avoid physical harm but also to prevent emotional distress, ensure data privacy, and mitigate potential social disruptions. It also indicates that end users will expect: (i) a focus on accessibility and inclusivity, (ii) emotional intelligence and understanding to be prioritized, and (iii) seamless human-robot communication.
• Accessibility and inclusivity: the robot must be capable of
understanding and responding to users with communication
impairments, such as those caused by strokes or other health conditions,
as well as children who have suffered trauma and have trust issues.
• Emotional intelligence: the robot should be able to interpret subtle cues,
including minimal facial expressions (micro-expressions) and vocal tones,
to understand the user's emotional state and needs. Examples given: fear,
pain, confusion, and trust.
In essence, this qualitative data argues that effective safety and standards for
humanoid robots cannot be developed in a vacuum. They must be deeply
informed by real human needs, fears, and aspirations, ensuring that technology
serves humanity in an inclusive, empathetic, and ultimately, safe manner.
The remainder of this chapter consists of three main sections, each dedicated to one of the thematic topics our survey covered within the field of HRI as it relates to humanoid robots. These are followed by a section outlining the lessons learned from the user interviews mentioned above, and a concluding section presenting our overall recommendations for standards makers and robot designers, based on both internal expertise and the results of the survey and user interviews.
Robot Appearance and Communication Methods
While both factors can be used to convey necessary information quickly and
intuitively, mismatches between the impressions given by appearance and the
real capabilities of the robot can cause safety concerns. People who feel too
familiar with the robot, or think it has a higher cognitive capacity than it does,
will be more likely to not show proper caution and to not follow proper
procedures to keep themselves and their surroundings safe. Such concerns can
be mitigated by proper training, signage, and other controls, but the
appearance, actions, and direct communication of the robot with the people
around it will continually reinforce certain impressions, so designers should be
deliberate about conveying accurate information.
In this framing, the first component conveys the robot's intent, and the second conveys the general state of the robot. Finally, the third component conveys details about the environment a robot is working in, the obstacles it needs to work around, and the tasks that the robot is performing.
With one exception, all survey respondents agreed that appearance does have
an impact on communication. While this is potentially helpful, it can also lead to
miscommunications when humans make false assumptions about the
capabilities of a robot based on its appearance. Many noted that the more
human-like a robot looks, the more people will expect it to be able to perform
actions (both in mobility and communication) equal to what a human can do.
Based on the user interviews, the majority of participants agreed that a robot's appearance should be determined by its intended function.
When evaluating the effect of appearance, the most common response was that the anthropomorphic appearance of today's humanoids leads people to overestimate the robots' true capabilities. People assume a robot has greater mobility than is currently possible, sensing and awareness beyond reality, higher intelligence, reasoning, and emotional understanding than it possesses, greater reliability in task performance than has been demonstrated, and a higher level of safety in its immediate vicinity than is guaranteed. A less common, but still possible, problem is that a less human-looking robot could lead users to underestimate what the robot is capable of, though this poses less of a safety concern.
User feedback indicates a desire for humanoid robots that can be personalized to user preference and adapted to users of different ages and capabilities. Young children and older adults were the groups most likely to want a more detailed appearance in specific aspects of the robot; for both, expressive facial features capable of emoting directly were favored. Offering both digital and more humanlike features was considered the best way to tailor interaction to individual needs.
Interview participants also favored robots that had features that would signal their abilities to users. These specific features would be indicators included deliberately in the design to visually display the robot's purpose, functionality, and capabilities to all humans who see it.
Expectations of the robots' mobility and physical gestures were also emotionally
driven. If the robot was intended for basic domestic assistance, individuals
wanted to have the ability to personalize the robot’s appearance to match the
environment, and have the option to “put it away”. Other interview respondents
wanted a high-performing surrogate human companion robot that served as a
caregiver with the ability to interact and move with human precision.
This also implies that as a robot is continually developed and its capabilities are
improved, the appearance should also change, although implementing this is
not a simple task. Design and movement remain critical in managing
expectations throughout the development of a robot.
Any feature of the robot can have these effects, including size, color, shape, voice, mannerisms, motion style, similarity to a human, and so on. Many survey responses associated certain types of features with whether they would evoke positive or negative reactions in users, and what those reactions might be, as summarized in Table X. While these features and reactions appear to be common, the magnitude of the reactions, as well as which features lead to which reactions, varies greatly from person to person. As with prior expectations, how an individual reacts is heavily dependent on background, culture, experience, and many other factors.
Table X: Summary of affective reactions to the robot and which features could be the cause (columns: Positive, Negative).
While the appearance and other features of the robot may primarily shape the
first impression a user has of the robot, other forms of voluntary communication
also shape the human-robot interaction. Based on the survey responses, there
were three general reasons identified for why communication between the
robot and human is necessary. Communicating the robot’s intent to the human
is the most commonly stated purpose for communication. A secondary purpose
of communication is to always make a human aware of the state of the robot.
This includes the task it is performing, as well as its operating status, power
status, controller state, health, environmental concerns, and other errors or
faults. Similarly, a third purpose is to communicate the details of the
environment and task that the robot is operating in, including awareness of the
presence and actions of humans in its surroundings.
People may expect humanoid robots to display communication behaviours that are at the intersection of machine and human capabilities. Further discussion on this theme can be found in the Communication Behaviours section.
Strategies for generating communication also vary, from holding a basic conversation, to simply confirming commands when received, to interjecting when safety is a concern (such as shouting "STOP"). The overall purpose is for nearby humans to have a clear understanding of what the robot is about to do. Essentially, every situation has an associated type and level of communication required to perform the task safely, and the robot must meet this requirement. The required communication capability also changes depending on whether the task takes place in a domestic, medical, industrial, service, or public environment. Small details such as color, surface finish, and lights can indicate the type of application the robot is probably going to be used for (and, ideally, has been designed specifically to fulfill).
Communication Types
Table X2: Summary of human-robot communication types.
While most survey responses specified how the robot should communicate with humans, several respondents also stated that the robot should be able to interpret and respond to commands from humans. Robots should also offer an accessible way for nearby humans to halt or stop undesirable behavior. Based on these responses, users generally expect humanoid robots to exhibit emotional intelligence, including recognizing and responding to human emotions, and to engage in meaningful, human-like conversations. Advanced multimodal communication (visual, audio, physical) is therefore essential for conveying robot intent and status, and for facilitating natural interaction with surrounding humans.
Discussion and Recommendations
Building on this, it is worth noting that many responses set extremely high
expectations of the minimum levels of required communication capability.
People expect humanoid robots to be able to communicate at the same level of
competency as humans, including being able to understand natural language
and gestures, and then be able to reproduce both by themselves.
Moreover, some responses specified that a robot should know everything about the actions it is performing, be able to reason at a high level about why it is doing them with "sound logic", and be able to communicate that reasoning if asked. While these capabilities would be ideal for clear communication, they may be beyond where the minimum requirements should be set. Notably, even respondents who observed that people hold unrealistically high expectations of humanoid robots still set high expectations for the standards governing them.
Overall, respondents wanted humanoid robots to communicate their intent, state, task, and perception of the environment. This indicates that the majority of users want to be aware of, or able to provide consent for, upcoming robot actions.
When robots and humans share physical space, communication abilities may
be especially critical for humanoid robots, for example, when communicating
robot intentions (such as upcoming motions and behaviours) and a robot’s
awareness of people’s presence in its vicinity, as well as its understanding of
their actions or intentions.
Given how human-to-human communication can sometimes be confusing,
how a humanoid robot is designed to communicate robot intent, awareness of
humans, potential contact, and to respond to contacts (as focused on in the
survey) needs to be carefully considered to ensure clear communication. To that
point, survey respondents suggested a simultaneous combination of visual,
audio, and physical communication modalities. In effect, using multiple types of
signals in coordination may help produce clear communication.
Survey respondents also indicated a strong expectation that the way humanoid
robots are made to communicate with humans would borrow from typical
human-to-human communication behaviors. However, they did not exclude
the use of machine-like communication behaviors. For example, colored lights
or beeps from a robot may not always clearly convey the intended information
to untrained people on their own, but they could be part of a robust and
successful communication system when combined with other behaviors.
Aurally, speech (potentially in conjunction with motions) may be used to provide
awareness of upcoming robot actions to nearby humans (e.g., “on your left”), but
it could also be used to seek consent or to negotiate actions with humans, given
the context (e.g., “May I move ahead?”). However, communication through
speech may not always function in loud environments or when communicating
with humans who have hearing impairments; additional communication
modalities would be beneficial in these cases. Survey respondents also
suggested that beeps or tones could be used before each movement to provide
awareness of upcoming motions, provided they do not negatively impact or
cause fatigue in the people the robots interact with.
Physically, robot motion is often slowed around humans to ensure safety, but this comes with a productivity tradeoff. As some respondents suggested, beyond communicating intent, robots could also communicate to surrounding humans how to move around them, to minimize slowdowns. Haptic wearables, for example, may be used to enhance human awareness of robot actions (e.g., vibrating when a robot is nearby).
Lights can be used as a visual indicator, but just as for robot intent, standards
may need to be defined. Lights may change color when a person is detected in
the vicinity; they may be used to indicate that the robot is listening and
processing, or they may be used in other ways. Body language, including facial
expressions, gaze direction, and gestures, may be used to indicate awareness of
the presence of humans in the form of a greeting, for example, with a smile, a
brief look toward a person, a wave, or a head nod. Communication may be more
involved, for example, directing the robot's head and gaze towards people, to
show awareness of their presence, and then tracking their movements to
communicate awareness of people’s actions. A display screen could also be
used to inform people of the robot’s awareness. Robot behaviours may,
however, be adjusted to avoid generating feelings of unease in humans who
may feel observed if a robot is “staring” at them.
When robots are interacting with untrained individuals (or trained individuals who may not be relied upon to remember the training received), human-like robot behaviours may be more conducive to effective communication. This would help ensure that communication signals are designed to clearly and intuitively convey an intended message (as opposed, for example, to having several warning lights and buzzers going off without an obvious meaning).
Visual signals may include the use of light indicators, perhaps following
industry-standard safety color codes for ease of interpretation. Intensity, color,
and flashing frequency of lights may be modulated to convey urgency.
Additionally, robot gestures may be used, such as deliberate movements that
convey the need for caution (e.g., raising the hands, turning the head to point
the gaze towards a potential contact location) or that convey the intended robot
motions and pathway (as discussed in Communicating Robot Intent). Robot
facial expressions could help convey the need for caution (e.g., moving
eyebrows, eyes, and mouth to display surprise). A screen display may also be
used to communicate an intended contact via text.
Audio signals may include the use of speech or electronic sounds (such as
beeps, boops, and buzzes) to warn humans of an impending contact or to
communicate an intended contact. In both cases, volume, pitch, and speed may
be modulated to convey urgency.
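As a sketch of how such coordinated modulation might be specified, the example below maps an urgency level to light and audio parameters across modalities. The specific levels, colors, and numeric values are assumptions for the example; real choices would need to follow applicable safety-color and audible-alarm standards rather than this sketch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignalProfile:
    light_color: str      # nominal indicator color (illustrative, not a cited code)
    flash_hz: float       # light flashing frequency; 0.0 = steady
    audio_volume_db: int  # loudness of the spoken or tonal warning
    speech_rate: float    # relative speech speed (1.0 = normal)

# Hypothetical mapping from urgency level to coordinated visual + audio
# parameters, so that all modalities escalate together.
URGENCY_PROFILES = {
    "info":     SignalProfile("green", 0.0, 55, 1.0),
    "caution":  SignalProfile("amber", 1.0, 65, 1.1),
    "warning":  SignalProfile("red",   2.0, 75, 1.2),
    "critical": SignalProfile("red",   4.0, 85, 1.3),
}

def signal_for(urgency: str) -> SignalProfile:
    """Return the coordinated multimodal signal for a given urgency level."""
    try:
        return URGENCY_PROFILES[urgency]
    except KeyError:
        raise ValueError(f"unknown urgency level: {urgency!r}")

if __name__ == "__main__":
    profile = signal_for("warning")
    print(profile.light_color, profile.flash_hz, profile.audio_volume_db)
```

Keeping all modalities in one profile table is one way to guarantee they never contradict each other (e.g., a calm green light paired with an urgent tone).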
Visual signals may include the use of lights, although they may have a limited
ability to convey a required meaning when used on their own, as discussed in
previous subsections. Robot gestures and facial expressions may be used, such
as deliberate movements that convey acknowledgment of or apologies for the
contact (e.g., turning the head towards the location of the contact, moving the
hands apologetically). A display screen could also be envisioned to provide
acknowledgments, apologies, or context to a human through text.
Audio signals may include speech, such that a contact may be acknowledged or
apologized for, and such that context may be gained or provided through verbal
communication. Electronic sounds may also be used, although they may have
limited ability to convey the required communication when used on their own.
Discussion
In summary, humanoid robots need communication capabilities that facilitate clear communication and that can be adapted to different users (offline and online), while including options for plain-language explanations through speech and/or digital display.
Any given robotic system intended to be used by or around people must exhibit
safe behaviors while in operation, and humanoid robots may come with their
own set of challenges due to their interactive nature, along with their high
complexity, mobility, and power.
The survey invited input on factors impacting the safety (or the perception
thereof) of robots performing tasks with or around people. The survey did not
provide prompts that encouraged respondents to consider safety in any
particular way, but instead encouraged feedback based on their expectations of
safe interactions. Responses were largely focused on physical safety and
touched upon topics including the definitions of safe behaviors, the
characteristics of robots that promote trust, and the reduction of risk due to the
design, functionality, and operating environment of robots. That is not to say
that psychological safety in these interactions should not be considered. Rather,
it indicates a blind spot in how most people think about safety around robots,
and calls for deliberate attention to be brought to this aspect of safety, including
in the topics covered below: how to define a safe and trustworthy humanoid
robot, what behaviours make a humanoid safe for HRI, and the implications of
human supervision.
Defining a “Safe” Humanoid in the HRI Context
For humans to be safe in the proximity of a humanoid robot, the robot needs
awareness of its surrounding environment and the people in it, such that it can
perceive humans and the objects that can be involved in human-robot
interaction, anticipate human actions and potential chain of events in a
dynamic environment, and react appropriately to prevent hazardous situations.
To remain mobile while carrying loads, a humanoid robot’s motors may also be
more powerful than those in traditional cobots. Maintaining safety in this case
may call for the implementation of robust safety systems, such as fail-safe
mechanisms, error detection, safe stop, and error recovery mechanisms that are
designed with the assumption that humans may be nearby. The presence of
humans may also need to be taken into account for the robot to safely respond
to power loss/fluctuation, localization errors, and uncertainty in dynamic
situational awareness.
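One established starting point for sizing such safeguards is the general minimum-distance formula of ISO 13855 (cited in the bibliography), S = (K × T) + C. The sketch below applies it with commonly cited default values; those defaults are assumptions for illustration, and any real safety calculation must follow the full standard.

```python
def min_separation_mm(stop_time_s: float,
                      approach_speed_mm_s: float = 1600.0,
                      intrusion_dist_mm: float = 850.0) -> float:
    """Simplified minimum protective separation distance, S = K*T + C,
    after the general formula in ISO 13855. K is the human approach
    speed, T the overall stopping time of the system (detection plus
    braking), and C an intrusion allowance. The defaults here are
    commonly cited textbook values, not a substitute for the standard.
    """
    if stop_time_s < 0:
        raise ValueError("stopping time must be non-negative")
    return approach_speed_mm_s * stop_time_s + intrusion_dist_mm

# A system needing 0.5 s to detect a person and come to a safe stop
# would need 1600 * 0.5 + 850 = 1650 mm of clearance under these
# assumptions.
print(min_separation_mm(0.5))  # → 1650.0
```

The point of the sketch is the structure of the tradeoff: longer stopping times (heavier, more powerful humanoids) translate directly into larger required separation distances.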
While the most critical concerns relate to physical safety, psychological safety
may need further consideration. In particular, how safe a robot is may not
always directly correlate with how much humans trust a robot, which will affect
human-robot interactions as described next.
Minimum Safety-Specific Behaviors for Human
Environments
Safe manipulation also requires secure grasping and releasing to prevent hazards. Rigorous testing and validation of all safety features in realistic scenarios will be crucial to guarantee safe operation.
Pinching and crushing hazards can arise when movable components operate near immovable features. Changes in operational conditions (e.g., lighting, surface textures, and clutter), some of which may be introduced dynamically by human activities, can make it difficult, if not impossible, to move safely and reliably through a given environment.
The human supervisor may also interact with nearby people through the intermediary of the robot, and the supervisor's role will be fundamentally shaped by the level of environmental uncertainty.
For a system as complex as a humanoid robot, the design of the user interface
is critical to (i) adequately communicate the robot’s capabilities and
limitations, but most importantly, to (ii) ensure the supervisor is not
overloaded (mentally, physically, temporally, …) and that their involvement does
not introduce further hazards.
Humanoid robots also need the ability to communicate their intentions, state,
and awareness of the environment. A variety of signals may be considered in
communication implementation, involving various modalities including audio
(verbal or nonverbal), visual, or physical signals, whether used explicitly,
implicitly, or through external devices. However, the interactive capabilities of a
humanoid robot need to be adapted to the needs and abilities of the individuals
with whom it will interact, and the environment in which it will do so. Special
attention to accessibility is necessary when working with individuals with
disabilities or members of vulnerable populations.
When humanoid robots share space and interact with humans, functionality
and reliability are critical, as are the inclusion of robust safety mechanisms,
including human supervision for uncertain situations. In some scenarios, the
benefits of robotic assistance may need to be balanced with potential harms,
physical or emotional, that may occur during human-humanoid robot
interaction. As part of risk assessment, robot malfunctions ranging from minor
issues to severe physical or emotional harm will need to be considered. How to
define acceptable rates of humanoid robot malfunctions within different use
cases remains an open question.
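As one illustration of how the acceptable-rate question could be framed statistically, the sketch below computes an exact one-sided upper confidence bound on the per-trial malfunction probability after a run of malfunction-free trials. This is an assumption-laden example of one possible framing, not a method this report prescribes.

```python
def zero_failure_upper_bound(trials: int, confidence: float = 0.95) -> float:
    """One-sided upper confidence bound on the per-trial malfunction
    probability when `trials` runs were observed with zero malfunctions.
    Uses the exact binomial relation p_upper = 1 - (1 - confidence)**(1/n);
    at 95% confidence this is close to the well-known "rule of three"
    approximation, 3/n.
    """
    if trials <= 0:
        raise ValueError("trials must be positive")
    if not 0.0 < confidence < 1.0:
        raise ValueError("confidence must be in (0, 1)")
    return 1.0 - (1.0 - confidence) ** (1.0 / trials)

# 100 malfunction-free task executions bound the true per-trial
# malfunction probability below roughly 3% at 95% confidence; proving a
# much lower rate requires a proportionally larger test campaign.
print(zero_failure_upper_bound(100))
```

The inverse reading is the useful one for standards work: demonstrating, say, a 0.1% malfunction bound with confidence would require on the order of thousands of incident-free trials per use case.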
Concerns relating to privacy, data collection, and transparency were only briefly
touched upon, but will affect the trust people place in their interactions with
humanoid robots. It is also worth reinforcing that the themes covered in this
chapter were those that the team considered the most critical. However,
additional concerns about interacting with humanoids may have been missed
and may surface as humanoid robots are being developed, tested, and
deployed.
Report Summary: Building a Standards Framework for
Humanoid Robots – Classification, Stability, and
Human-Robot Interaction
For SDOs, this classification system can serve as the “table of contents” for
future standards. It allows committees to map which standards are broadly
applicable (e.g., functional safety from ISO 13849) and which need humanoid-
specific extensions (e.g., balance safety, fall-response behaviors).
If classification is the foundation, then stability becomes the obstacle that must
be overcome for humanoids to operate effectively in shared human spaces.
Unlike wheeled or fixed robots, humanoids constantly deal with managed
instability; even powered-down robots can fall, which creates inherent hazards.
For SDO members, stability testing and safety validation must be treated as
intertwined efforts. ASTM and IEEE can lead the development of repeatable test
methods and quantifiable metrics, while ISO and IEC integrate these into
regulatory-grade safety standards.
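One candidate balance metric from the biomechanics literature cited in this report's bibliography is Hof's margin of stability, based on the extrapolated centre of mass. The sketch below computes it along one axis; treating the robot as a linear inverted pendulum of a given length is a simplifying assumption for illustration, not a standardized test method.

```python
import math

def margin_of_stability(com_pos: float, com_vel: float,
                        bos_edge: float, pendulum_length: float,
                        g: float = 9.81) -> float:
    """Margin of stability along one axis, after Hof's extrapolated
    centre of mass: XcoM = x + v / omega0, with omega0 = sqrt(g / l)
    for a linear inverted pendulum of length l. A positive margin means
    the XcoM still lies inside the base-of-support edge `bos_edge`
    (all positions in metres, velocity in m/s, measured along the same
    axis and from the same origin).
    """
    omega0 = math.sqrt(g / pendulum_length)
    xcom = com_pos + com_vel / omega0
    return bos_edge - xcom

# CoM 5 cm behind the forward support edge, moving forward at 0.1 m/s,
# with a 0.9 m pendulum length: the margin stays positive, so the
# extrapolated CoM remains inside the base of support.
print(margin_of_stability(0.0, 0.1, 0.05, 0.9))
```

A repeatable test method could, for example, apply calibrated pushes and log the minimum margin reached, giving exactly the kind of quantifiable metric the report calls on ASTM and IEEE to standardize.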
Human-Robot Interaction: Managing Risk and
Perception
From a safety perspective, humanoids introduce indirect risks that are not
covered by traditional robot safety standards. Automating a process with
humanoids can alter workflow pacing and repetition, thereby increasing
musculoskeletal and cognitive risks for human workers in both pre- and post-
automation tasks.
From a perception standpoint, technically safe motions can still feel unsafe; fast
limb swings, sudden steps, or a robot standing too close can cause discomfort,
especially in public environments where bystanders are untrained.
Classification and stability standards directly inform these HRI standards. For
example, a humanoid classified for public use would require higher stability
thresholds, stricter fall-response requirements, and interaction-specific body
language guidelines than one confined to a closed industrial setting.
The interdependence of classification, stability, and HRI highlights the need for
a coordinated, multi-SDO approach rather than piecemeal adaptations of
existing standards. A suggested pathway includes:
• Unified Classification: Adopt a shared taxonomy across SDOs to define humanoid types, guiding which existing standards can apply and where gaps remain.
• Parallel Development of Stability Metrics and Test Methods: ASTM and IEEE can lead the creation of task-based performance test methods (e.g., walking, manipulation under disturbances), while IEEE can standardize stability metrics for predictive and instantaneous balance assessment.
• Integration into Safety Standards: ISO and IEC can develop application-specific safety thresholds, incorporating test methods and metrics into regulatory safety validation.
• HRI and Perception-Based Standards: Build on classification and stability results to create interaction guidelines addressing both physical and psychosocial safety.
• Centralized Coordination: Establish a joint working group spanning ISO, IEEE, and ASTM to ensure alignment. A shared roadmap would prioritize stability-related standards first, as they are the primary barrier to safe adoption.
The three pillars are not isolated; they are mutually reinforcing. Classification
clarifies what type of humanoid is being evaluated, stability standards prove
that it can function safely, and HRI standards ensure it does so in ways that
humans find acceptable and trustworthy.
Until these elements are developed in tandem, humanoids will remain limited
to controlled environments and pilot programs. But with a coordinated
standards effort, SDOs have the opportunity to build the framework that will
make humanoids reliable, certifiable, and ultimately, deployable in the diverse
human spaces for which they are designed.
Special Thanks To The Following Contributors Who
Provided Insights Into The Development Of This
Report
Bibliography (Extracted from
Footnotes)
1. In this document, ‘self-balancing’ and ‘actively-balancing’ are used as
interchangeable terms when referring to a robot. They both describe a robot
that only remains upright through active, powered balance.
2. ISO 12100:2010 Safety of machinery — General principles for design — Risk
assessment and risk reduction
3. IEC 60204-1 (Consolidated version 2016+AMD1:2021) Safety of machinery -
Electrical equipment of machines - Part 1: General requirements
4. ISO 10218-1:2025 Robotics — Safety requirements Part 1: Industrial robots
5. ISO 10218-2:2025 Robotics — Safety requirements Part 2: Industrial robot
applications and robot cells
6. ISO 13482:2014 Robots and robotic devices — Safety requirements for
personal care robots
7. ISO 13850:2015 Safety of machinery — Emergency stop function — Principles
for design
8. ISO 13855:2024 Safety of machinery — Positioning of safeguards with
respect to the approach of the human body
9. IEC 61508 Functional safety of electrical/electronic/programmable electronic
safety-related systems - (Part 1 to 7)
10. ISO 13849-1:2023 Safety of machinery — Safety-related parts of control
systems Part 1: General principles for design, and ISO 13849-2:2012 Safety of
machinery — Safety-related parts of control systems Part 2: Validation
11. ANSI/RIA R15.08-1-2020 Industrial Mobile Robots - Safety Requirements -
Part 1: Requirements for the Industrial Mobile Robot
12. ANSI/ITSDF B56.5-2024 Safety Standard for Driverless, Automatic Guided
Industrial Vehicles and Automated Functions of Manned Industrial Vehicles
13. ISO/TR 20218-1:2018 Robotics — Safety design for industrial robot systems
Part 1: End-effectors
14. IEC 63310:2025 Functional performance criteria for AAL robots used in
connected home environment
15. ISO 7176-1:2014 Wheelchairs Part 1: Determination of static stability
16. ISO 7176-2:2017 Wheelchairs Part 2: Determination of dynamic stability of
electrically powered wheelchairs
17. ISO/TR 4448-1:2024(en) Intelligent transport systems — Public-area mobile
robots (PMR) — Part 1: Overview of paradigm
18. See full details and options for PFHd values in 5.3.3 of ISO 10218-1:2025
19. ISO 21448:2022 Road vehicles — Safety of the intended functionality
20. ISO 26262 - Road vehicles — Functional safety (parts 1 to 10)
21. ISO/AWI 25785-1 - Safety requirements for industrial mobile robots with
actively controlled stability (legged, wheeled, or other forms of locomotion) -
Part 1: Robots
22. ISO 9283:1998 Manipulating industrial robots — Performance criteria and
related test methods
23. ISO 18646 Robotics — Performance criteria and related test methods for
service robots (Part 1 to 8)
24. ISO/DIS 18646-5 Robotics — Performance criteria and related test methods
for service robots — Part 5: Locomotion for legged robots
25. F. Aller, D. Pinto-Fernandez, D. Torricelli, J. L. Pons and K. Mombaur, "From
the State of the Art of Assessment Metrics Toward Novel Concepts for
Humanoid Robot Locomotion Benchmarking," in IEEE Robotics and
Automation Letters, vol. 5, no. 2, pp. 914-920, April 2020
26. Ramuzat N, Stasse O, Boria S. Benchmarking Whole-Body Controllers on
the TALOS Humanoid Robot. Front Robot AI. 2022 Mar 4;9:826491.
27. C. Sferrazza, D.-M. Huang, X. Lin, Y. Lee, and P. Abbeel, “HumanoidBench:
Simulated Humanoid Benchmark for Whole-Body Locomotion and
Manipulation.” Robotics: Science and Systems Conference 2024
28. J. Baltes and S. Saeedvand, "Rock Climbing Benchmark for Humanoid
Robots," 2022 International Conference on Advanced Robotics and Intelligent
Systems (ARIS), Taipei, Taiwan, 2022, pp. 1-4,
29. Stasse O, Giraud-Esclasse K, Brousse E, Naveau M, Régnier R, Avrin G,
Souères P. Benchmarking the HRP-2 Humanoid Robot During Locomotion.
Front Robot AI. 2018 Nov 8;5:122.
30. “Eurobench project.” [Online]. Available: https://eurobench2020.eu/
31. W. Thibault, F. J. A. Chavez and K. Mombaur, "A Standardized Benchmark
for Humanoid Whole-Body Manipulation," 2022 IEEE-RAS 21st International
Conference on Humanoid Robots (Humanoids), Ginowan, Japan, 2022, pp.
608-615
32. V. Lippi, T. Mergner, T. Seel and C. Maurer, "COMTEST Project: A Complete
Modular Test Stand for Human and Humanoid Posture Control and Balance,"
2019 IEEE-RAS 19th International Conference on Humanoid Robots
(Humanoids), Toronto, ON, Canada, 2019, pp. 630-635
33. Peng, W. Z., Song, H., and Kim, J. H. (March 12, 2021). "Stability Region-Based
Analysis of Walking and Push Recovery Control." ASME. J. Mechanisms
Robotics. June 2021; 13(3): 031005.
34. J. A. Castano, E. M. Espuela, J. R. Rojas, E. M. Hoffman, and C. Zhou,
“Benchmarking standing stability for bipedal robots,” IEEE Access, p. 1, Jan.
2025, doi: 10.1109/access.2025.3529191.
35. C. Curtze, T. J. W. Buurke, and C. McCrum, "Notes on the margin of stability," Journal of Biomechanics, vol. 166, p. 112045, Mar. 2024, doi: 10.1016/j.jbiomech.2024.112045.
36. T. Mergner and V. Lippi, "Posture Control—Human-Inspired Approaches for Humanoid Robot Benchmarking: Conceptualizing Tests, Protocols and Analyses," Frontiers in Neurorobotics, vol. 12, May 2018, doi: 10.3389/fnbot.2018.00021.
37. ISO 13482:2014 “Robots and robotic devices — Safety requirements for
personal care robots” has some relevant references for human-robot
interactions but it is specifically dedicated to service applications and under
deep revision at the time of publication of this Report, so omitted as an option.
38. This Technical Specification is soon to be withdrawn as it is incorporated in
the revised series ISO 10218-1 (and 2):2025 “Robotics — Safety requirements -
Part 1 (and 2)”.
39. S. Kajita et al., "Impact acceleration of falling humanoid robot with an
airbag," 2016 IEEE-RAS 16th International Conference on Humanoid Robots
(Humanoids), Cancun, Mexico, 2016, pp. 637-643,
40. Elo, S. and Kyngäs, H. (2008), "The qualitative content analysis process," Journal of Advanced Nursing, 62: 107-115. https://doi.org/10.1111/j.1365-2648.2007.04569.x
41. Greta Hilburn, UX Researcher and Designer, US Defense Acquisition
University