Systematic Review
Security Aspects of Social Robots in Public Spaces: A Systematic
Mapping Study
Samson Ogheneovo Oruma 1, * , Yonas Zewdu Ayele 2, * , Fabien Sechi 2 and Hanne Rødsethol 3
1 Department of Computer Science and Communication, Østfold University College, 1757 Halden, Norway
2 Department of Risk, Safety, and Security, Institute for Energy Technology, 1777 Halden, Norway;
[email protected]
3 Department of Control Room and Interaction Design, Institute for Energy Technology, 1777 Halden, Norway;
[email protected]
* Correspondence: [email protected] (S.O.O.); [email protected] (Y.Z.A.)
Abstract: Background: As social robots increasingly integrate into public spaces, comprehending their
security implications becomes paramount. This study is conducted amidst the growing use of social
robots in public spaces (SRPS), emphasising the necessity for tailored security standards for these
unique robotic systems. Methods: In this systematic mapping study (SMS), we meticulously review
and analyse existing literature from the Web of Science database, following guidelines by Petersen et al.
We employ a structured approach to categorise and synthesise literature on SRPS security aspects,
including physical safety, data privacy, cybersecurity, and legal/ethical considerations. Results: Our
analysis reveals a significant gap in existing safety standards, originally designed for industrial
robots, that need to be revised for SRPS. We propose a thematic framework consolidating essential
security guidelines for SRPS, substantiated by evidence from a considerable percentage of the primary
studies analysed. Conclusions: The study underscores the urgent need for comprehensive, bespoke
security standards and frameworks for SRPS. These standards ensure that SRPS operate securely
and ethically, respecting individual rights and public safety, while fostering seamless integration into
diverse human-centric environments. This work is poised to enhance public trust and acceptance of
these robots, offering significant value to developers, policymakers, and the general public.
Keywords: social robots; human–robot interaction; security; public space; cybersecurity; privacy; safety
Citation: Oruma, S.O.; Ayele, Y.Z.; Sechi, F.; Rødsethol, H. Security Aspects of Social Robots in Public Spaces: A Systematic Mapping Study. Sensors 2023, 23, 8056. https://doi.org/10.3390/s23198056
Academic Editor: Wataru Sato
Received: 22 August 2023; Revised: 11 September 2023; Accepted: 20 September 2023; Published: 24 September 2023
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
1. Introduction
The world has witnessed a significant surge in deploying social robots in public spaces (SRPS) in recent years [1]. These SRPS are designed to collaborate with humans and have found applications across diverse environments, including retail [2], healthcare [3], education [4], and public services [5]. As SRPS function autonomously in environments characterised by frequent and complex human interaction, security issues surrounding their deployment are becoming increasingly critical.
Social robots (SRs) are autonomous, physically embodied agents that interact and communicate with humans or other autonomous agents based on social behaviours and predefined rules using their sensors and actuators [6]. They represent a transformative movement in robotics that has the potential for profound societal impacts, pending their attainment of adequate levels of autonomy, AI capability, safety, and security for widespread utilisation [7].
However, integrating robots into public and private domains introduces the challenge of assuring the security and safety of human interactions [8]. The complexities of public spaces, with their intricate and unpredictable nature, raise significant stakes involving various stakeholders and interaction dynamics [9]. SRs could potentially amass and analyse vast amounts of personal information, akin to social media platforms and digital smartphone assistants. Notably, these robots present a unique challenge: users may have limited control over data collection processes, creating a potential imbalance between technology and individual privacy rights [10].
The rising prominence of SRPS accentuates the need for an in-depth, systematic
exploration of their security facets. There is an urgent call to craft robust security standards
that cater specifically to these robots, setting them apart from industrial variants. This
study addresses this void by offering an exhaustive analysis of SRPS security dimensions.
Numerous studies have underscored the importance of safety, security, privacy, and ethical
considerations in SRPS [11,12]. While some have delved into the threat landscape and
attack vectors [5,8], others concentrated on robotic cybersecurity [13]. Yet, comprehensive
research covering all security aspects of SRPS remains sparse.
This systematic mapping study (SMS) is designed to review and analyse the existing
body of literature related to the security of SRPS. It seeks to identify and evaluate the current
security measures, detect existing gaps and issues, and propose a set of comprehensive
guidelines that could serve as a foundation for future security standards for SRPS. In doing
so, this study aims to provide valuable insights and direction for researchers, policymakers,
robot developers, and the broader community.
The main contributions of this study include the following:
1. Comprehensive literature analysis: This study provides a thorough and systematic
review and analysis of the existing literature on the security aspects of SRPS, examin-
ing academic papers, reports, and standards across multiple databases. This forms a
robust and detailed map of the current knowledge base.
2. Identification of key security themes: The study identifies and categorises the key
security themes pertinent to SRPS, including physical safety and integrity, data pri-
vacy and confidentiality, communication security, ethical considerations, and others.
This thematic framework helps to structure the complex and diverse security issues
associated with SRPS.
3. Assessment of existing standards: The study conducts a detailed assessment of ex-
isting safety and security standards, revealing gaps where these standards—often
developed for industrial robots—are inadequate for SRPS due to their unique opera-
tional environments and challenges.
4. Proposal for new security guidelines: Based on the literature analysis and existing
standards, this study proposes a set of comprehensive security guidelines specifically
tailored to SRPS. These guidelines are designed to be actionable and can be used as a
baseline for developing formal security standards for SRPS.
5. Insights into cultural and ethical implications: The study sheds light on the broader
cultural and ethical implications of deploying social robots in public spaces, fostering
awareness of the need for SRPS to respect human rights, privacy, and social norms.
6. Highlighting future research directions: This study outlines several future research
paths for advancing the field, such as exploring cultural differences in SRPS accep-
tance, developing standardised testing protocols for these robots, and analysing
real-world SRPS deployments.
7. Value to stakeholders: The study offers invaluable insights for various stakeholders,
including policymakers, roboticists, industry professionals, and the general public.
The study provides a foundational resource for policymakers and industry profes-
sionals to inform the development of regulations and standards. For roboticists and
developers, it offers a clear framework for designing and deploying SRPS with secu-
rity at the forefront. For the general public, it aims to raise awareness of the potential
risks and benefits associated with SRPS.
The remainder of this paper encompasses a review of the pertinent literature on the security of SRPS (Section 2), an account of the study’s methodological approach, including the literature search strategy and analysis framework (Section 3), a detailed presentation of the findings segmented by key themes (Section 4), a discussion of their implications (Section 5), and a concluding overview accentuating the study’s primary insights while proposing directions for future research in this domain (Section 6).
2. Related Works
As SRs increasingly become integrated into public spaces, a growing body of literature
delves into the various aspects of their deployment. This section reviews relevant works in
security, safety, privacy, ethics, and regulatory standards associated with SRPS.
In contrast to these individual contributions, the present systematic mapping study draws together a wide array of individual studies to offer a comprehensive perspective that is greater than the sum of its parts.
Firstly, the systematic mapping study delineates and highlights the critical areas
of SRPS, including safety standards, data privacy, ethical considerations, and human-
centric interaction protocols, fostering a deeper understanding of the multifaceted security
landscape surrounding SRPS. It pinpoints the areas where prevailing standards—primarily
designed for industrial robots—fall short in addressing the unique challenges posed by
social robots in public spaces.
Secondly, it sheds light on the emerging trends and notable developments in the field,
presenting a chronological view of the research trajectory, which individual studies might
not provide. This gives readers an understanding of the evolutionary path of research in
this domain, helping identify both the peaks of heightened research activity and periods of
relative stagnation.
Moreover, it spotlights pivotal works and benchmark studies in the sector—like
Bryson et al.’s heavily cited paper—helping researchers quickly identify the cornerstone
literature in the field and understand the pivotal discourses that have shaped the current
understanding of SRPS security aspects.
Additionally, mapping out the thematic areas prominently covered in primary studies
offers a snapshot of the focal points of contemporary research, presenting a cohesive picture
of the prevailing academic discourse and allowing readers to grasp the central themes
dominating the SRPS narrative quickly.
Lastly, it lays out a roadmap for future research, identifying pressing questions that
remain unanswered and proposing potential directions for further exploration, thereby
acting as a compass that guides forthcoming research to address gaps and venture into
unexplored territories.
Thus, the review paper acts as a pivotal resource, weaving individual research threads
into a rich tapestry that offers readers an in-depth understanding of the state of SRPS
security research, guiding scholars and practitioners to navigate the complex landscape
with an informed, comprehensive perspective. It brings crucial insights and perspectives
that are not readily apparent when individual papers are viewed in isolation, fostering a
nuanced understanding grounded in a rich, multifaceted view of the existing literature.
3. Methodology
The methodology for this systematic mapping study (SMS) on “security aspects of
SRPS” is predicated upon the guidelines stipulated by Petersen et al. [20,21] for conducting
an SMS, supplemented by Kitchenham’s [22,23] guidelines for a systematic literature review
(SLR) and insights from Weidt and Silva [24]. Recognising the overlaps between SMS
and SLR, these guidelines are robust foundations for the study.
As illustrated in Figure 1, our proposed methodology unfolds across three stages:
planning, conducting, and reporting the SMS. Each stage comprises four key activities,
carefully calibrated to ensure a comprehensive and valid exploration of the subject matter.
Figure 1. The proposed SMS methodology: three stages (planning, conducting, and reporting), each comprising four key activities, spanning the need for the SMS, the SMS goal, research questions, quality assessment, study identification, search strategy, inclusion/exclusion criteria, mapping classification, data extraction, the data extraction form, research trends, and threats to validity with mitigation strategies.
Research Questions
To effectively address the above objectives, two groups of research questions (RQ)
were meticulously formulated:
Group 1: Unfolding Research Trends and Methodologies Concerning Security Aspects
of SRPS
• RQ 1.1 How has the research focus on the security aspects of SRPS evolved over time?
This question aims to provide a historical overview and trace the development and
shifts in focus, offering insights into the trajectory of the research field.
• RQ 1.2 How can insights into the influence and impact of these studies be extracted
from their citation network? Understanding the citation network can help identify
key studies that have shaped the field, providing an understanding of their relevance
and impact.
• RQ 1.3 What methodologies, types of studies, and thematic areas predominantly
characterise research on the security aspects of SRPS? By identifying the methodologies
and types of studies used, this question seeks to understand the approaches that have
been most effective and prevalent in studying this topic.
Filtering Strategy
In order to further refine our search results, we incorporated a two-stage filtering process
as suggested by Petersen et al. [20,21] and Kitchenham and Brereton [22,23]. The first stage
involved an examination of the titles and abstracts of potential studies. The second stage
required a full-text reading of these studies. To expedite this process, when the title and
abstract provided sufficient information to reach a decision, only the introduction and
conclusion of a given study were consulted.
• Publication type (IC3): We only include peer-reviewed studies to ensure the credibility
of the information used in our SMS.
• Language (IC4): To standardise the analysis process and eliminate language-related
biases, we only include studies published in English.
• Publication period (IC5): Given the rapidly evolving nature of the field, we focus on
studies published between 2016 and 2022 to ensure relevance and recency.
Our exclusion criteria include the following:
• Out of scope (EC1): We exclude studies that do not align with our specific domain of
interest. For example, we omit studies concentrating on the appearance, acceptance,
trust, and application of social robots unless they specifically address security aspects.
• Secondary studies (EC2): Secondary studies are excluded to ensure that our SMS is
built upon primary research.
• Non-English language (EC3): Non-English studies are excluded to eliminate any
potential inaccuracies stemming from translation.
• Duplicate studies (EC4): Duplicated studies or extended versions of original papers
are excluded to avoid redundancies.
• Inaccessible studies (EC5): To maintain transparency and reproducibility, any studies
that are not openly accessible are excluded.
• Front and back matter (EC6): Any search results that only contain front or back matter
are excluded, as they do not contribute meaningful data.
• Non-peer-reviewed papers (EC7): Studies that have not undergone the peer-review
process are excluded to ensure that only quality research contributes to our SMS.
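The automatable parts of these criteria can be pre-applied to a bibliographic export before the manual screening passes; the surviving candidates would then go through the two-stage title/abstract and full-text screening described above. The sketch below is a minimal illustration of that idea rather than the tooling used in this study: the record field names (year, language, doc_type, and so on) are hypothetical placeholders for whatever the database export actually provides, and EC1 (scope) still requires human judgement.

# Hedged sketch: pre-screening a bibliographic export against the automatable
# criteria (IC3-IC5, EC2-EC5, EC7). Field names are hypothetical placeholders.

def passes_screening(record: dict) -> bool:
    """Return True if a record survives the automatable inclusion/exclusion criteria."""
    if record.get("language", "").lower() != "english":      # IC4 / EC3
        return False
    if not 2016 <= int(record.get("year", 0)) <= 2022:        # IC5
        return False
    if not record.get("peer_reviewed", False):                # IC3 / EC7
        return False
    if record.get("doc_type") in {"review", "survey"}:        # EC2 (secondary studies)
        return False
    if not record.get("full_text_accessible", False):         # EC5
        return False
    return True

def deduplicate(records: list[dict]) -> list[dict]:
    """Drop duplicates or extended versions keyed by DOI or normalised title (EC4)."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or rec.get("title", "").strip().lower()
        if key and key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Toy usage with a single hypothetical record:
records = [{"title": "Example study", "year": 2020, "language": "English",
            "peer_reviewed": True, "doc_type": "article",
            "full_text_accessible": True, "doi": "10.0000/example"}]
candidates = [r for r in deduplicate(records) if passes_screening(r)]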
Quality Assessment
The WoS core collection is a trusted source of high-quality research. In addition to this,
our methodology incorporates a snowballing strategy, checking the references of included
studies to locate further relevant work. This helps to ensure a comprehensive representation
of the field. The introduction of several exclusion criteria ensures that only publications of
good quality are included in our study. While Petersen, Kitchenham, and Weidt and Silva
emphasise that the quality assessment is not a priority for SMS, we take this step to further
ensure the reliability of our results, as our study intends to provide not just an overview
but a comprehensive, high-quality mapping of the emerging research area.
Selection Bias
Selection bias may occur during the identification of primary studies due to the
inclusion/exclusion criteria or the chosen databases.
Mitigation: We used well-defined inclusion/exclusion criteria and a comprehensive
search string to minimise selection bias. Further, the Web of Science (WoS) Core Collection,
a broad and globally recognised database, was employed to ensure the wide coverage
of literature.
Interpretation Bias
There might be a bias in interpreting the results of the extracted data, leading to
skewed findings.
Mitigation: Findings were cross checked among the research team members to avoid
individual interpretation bias. Disagreements were resolved through consensus or by
involving a third researcher.
Quality Assessment
There is a risk that the quality of primary studies may affect the quality of the SMS findings.
Mitigation: We included only peer-reviewed studies and applied specific exclusion
criteria to eliminate non-peer-reviewed papers. Snowballing was performed to increase the
sample representation and ensure a good-quality SMS.
By pre-emptively considering these threats and their mitigations, we aim to ensure
the validity and reliability of the study’s findings.
Table 1. Search strings, result counts, and URLs for primary study retrieval.
Table 2. Backward snowballing process summary with reference counts and study IDs.
4. Results
This section presents the results of our study in the context of our initial research
questions. Of the 176,281 initial studies, only 14 directly addressed SRPS and security.
The backward snowballing of these studies’ references led to 26 additional papers, culmi-
nating in 40 core studies for our systematic mapping.
4.1. Group 1: Unfolding Research Trends and Methodologies Concerning Security Aspects of SRPS
4.1.1. How Has the Research Focus on the Security Aspects of SRPS Evolved over Time?
In addressing the evolution of research focus on security aspects of SRPS, we analysed
the temporal distribution of our primary studies, considering their publication year, study
type, and the security topics of focus.
From our analysis, the composition of the primary studies indicates a predominant
preference for journal articles, making up 25 of the total, followed by 12 conference papers,
two book sections, and a single book. This suggests that most research on this topic is
published in peer-reviewed journals, indicating the academic significance and rigour of
the subject matter. The summary of these studies is presented in Table 3, which details
the 40 primary studies, including their unique study identifiers, authors, titles, and cita-
tion counts.
Regarding the source of these studies, most contributions originate from SpringerLink,
with 13 papers. This is followed by IEEE Xplore with nine papers, while ACM and MDPI
each contributed four. Other notable contributors include ScienceDirect and Science Robotics,
with three and two papers, respectively, while Taylor & Francis, Emerald, IOActive, JMIR,
and AJIS each have one. This wide range of sources illustrates the cross-disciplinary
interest in SRPS security aspects, extending beyond the traditional computer science and
engineering fields.
Looking at the publication years of the primary studies, we see a rise from two papers in 2016 to a peak of nine papers in 2017, followed by eight papers in 2019. The subsequent years (2020 and 2021) each contributed six papers, with a slight decrease to five papers in 2022. This pattern may suggest an
increasing interest and recognition of the importance of SRPS security aspects within the
research community, with 2017 marking a notable spike in publications. The relatively
consistent numbers from 2019 to 2022 suggest that this field of study maintains its relevance,
ensuring a continuous stream of research despite the emergence of COVID-19 restrictions
that affected all research activities. This may also account for the slight decrease in 2022.
A comprehensive summary of the visual representation of the temporal distribution of
these papers is shown in Figure 2.
This in-depth analysis not only answers our research question regarding the evolution
of focus on the security aspects of SRPS but also presents key insights into the state and
progress of the research field.
Figure 2. Visual representation of SRPS security aspects research papers. (a) Number of and types of
papers per digital library, (b) number of papers per year.
Table 3. Temporal distribution and details of primary studies on security aspects of SRPS.
P14S25 | 2019 | Chatterjee [64] | Impact of AI regulation on intention to use robots: From citizens and government perspective
P14S26 | 2016 | Pagallo [65] | The Impact of Domestic Robots on Privacy and Data Protection, and the Troubles with Legal Regulation by Design
To further address the research question, “How can insights into the influence and
impact of these studies be extracted from their citation network?” it is essential to consider
the authors’ affiliations and their respective backgrounds, either from academia, industry,
or both. Table 4 shows that most of the studies originate from academic institutions. This
underscores the fact that foundational research, theories, and principles in the domain
of SRPS security largely arise from the academic sphere. Such research often lays the
groundwork, offering insights, methodologies, and frameworks upon which practical
applications can be built.
4.1.2. How Can Insights into the Influence and Impact of These Studies Be Extracted from
Their Citation Network?
To answer the research question at hand, it is crucial to examine the citation patterns of the primary studies listed in Table 5. This analysis illuminates the influence and impact of these studies in the field of SRPS security. We can gauge this influence by looking at both the total number of citations each paper has received and its average annual citations, as recorded on Google Scholar on 3 August 2023; the latter takes into account the duration a paper has been published and available to be cited.
Bryson et al.’s “Of, for, and by the people: the legal lacuna of synthetic persons”
(P06S9) stands prominently at the pinnacle of our citation list. Released in 2017, this study
has garnered 292 citations, averaging an impressive 49 citations annually. This substantial
citation frequency underscores the paper’s profound impact on the discipline, implying
that its legal perspectives have been instrumental in subsequent research. This further
emphasises that the legal dimensions of SRPS remain a crucial focal point in discussions
surrounding SRPS security.
Figure 3. Total citations and average annual citations of each primary study (bar chart; values recorded from Google Scholar).
Table 6. Summary of research types and methodologies employed in our primary studies.
Research types:
• Solution proposal (18): P01, P03, P04, P05, P07, P12, P13, P03S2, P05S6, P05S7, P05S8, P06S10, P06S14, P11S21, P13S22, P13S23, P13S24, and P14S25.
• Philosophical (13): P02, P06, P10, P14, P03S1, P03S3, P06S9, P06S12, P06S13, P06S14, P10S16, P10S18, and P14S26.
• Evaluation (7): P09, P11, P03S2, P04S5, P06S15, P11S20, and P13S23.
• Validation (3): P08, P06S15, and P14S25.
• Experience report (3): P06S11, P10S17, and P10S19.
• Focus group (1): P04S4.
Research methods:
• Quantitative (10): P07, P08, P11, P13, P14, P05S7, P11S20, P11S21, P13S22, and P14S25.
• Qualitative (15): P02, P06, P10, P03S1, P03S3, P04S4, P04S5, P06S9, P06S11, P06S12, P06S13, P10S16, P10S18, P10S19, and P14S26.
• Mixed (15): P01, P03, P04, P05, P09, P12, P03S2, P05S6, P05S8, P06S10, P06S14, P06S15, P10S17, P13S23, and P13S24.
Research Types
Our SMS unveiled the following classifications of studies:
1. Solution proposals: These articles chiefly aim to identify challenges and propose
technical or methodological remedies. It is the predominant research type in our data
set with 18 studies (P01, P03, P04, P05, P07, P12, P13, P03S2, P05S6, P05S7, P05S8,
P06S10, P06S14, P11S21, P13S22, P13S23, P13S24, and P14S25).
2. Philosophical/conceptual analyses: These engage with the philosophical dimensions,
ethical nuances, or critical discussions related to the topic. Thirteen studies (P02, P06,
P10, P14, P03S1, P03S3, P06S9, P06S12, P06S13, P06S14, P10S16, P10S18, and P14S26)
belong to this realm.
3. Evaluation research/reports: These appraise specific facets or occurrences. There are
seven studies (P09, P11, P03S2, P04S5, P06S15, P11S20, and P13S23) in this classification.
4. Validation studies: These are dedicated to endorsing particular hypotheses or sys-
tems. Papers P08, P06S15, and P14S25 are representatives of this category.
5. Experience reports: These elucidate findings and lessons drawn from specific experi-
ences or enactments. Studies P06S11, P10S17, and P10S19 exemplify this category.
6. Focus groups: We identified a singular focus group, labelled as P04S4.
Research Methods
Through our SMS, we discerned the following methodologies utilised in the studies:
• Qualitative: These studies gravitate towards non-numeric data, leveraging observa-
tions, discussions, or narrative interpretations. A total of 15 papers (P02, P06, P10,
P03S1, P03S3, P04S4, P04S5, P06S9, P06S11, P06S12, P06S13, P10S16, P10S18, P10S19,
and P14S26) in our compilation adopted this approach.
• Quantitative: Here, the emphasis is on numeric data, frequently accompanied by
statistical scrutiny. In our data set, ten studies (P07, P08, P11, P13, P14, P03S2, P05S7,
P06S15, P11S21, and P14S25) predominantly employed this method.
• Mixed: A significant portion of the studies, numbering 15 (P01, P03, P04, P05, P09,
P12, P03S2, P05S6, P05S8, P06S10, P06S14, P06S15, P10S17, P13S23, and P13S24),
amalgamated both qualitative and quantitative methods for a comprehensive analysis.
Thematic Insights
From our SMS, we derived seven core thematic areas. They are as follows:
1. Cybersecurity
• Encompasses a broad range of topics, from network, application, and cloud
security to user education, identity management, and cybersecurity regulations.
• Included papers: P03, P04, P05, P06, P07, P10, P13, P03S2, P06S10, P06S13,
P10S19, P13S22, P13S23, and P13S24.
• Represents 35% of our primary studies.
2. Safety
• Discusses structural safety, motion, fail-safe mechanisms, protection against
cyber–physical threats like stalking, emergency responses, and safety during
maintenance.
• Included papers: P01, P11S20, and P11S21.
• Represents 7.5% of our primary studies.
3. Privacy
• Delves into data collection, consent, anonymisation, transparency, and behavioural
privacy. Topics like data security and control have been grouped under cybersecurity.
• Included papers: P03, P04, P04S4, P04S5, P06S13, and P14S26.
• Represents 15% of our primary studies.
4. Reliability and continuity
• Covers aspects such as hardware and software reliability, maintenance, network
connectivity, fault tolerance, and user interface consistency.
• Included papers: P11, P13, and P05S6.
• Represents 7.5% of our primary studies.
5. Legal challenges
• Focuses on data privacy laws, cybersecurity standards, SR liability and insurance,
telecommunication regulations compliance, and public space-specific regulations.
• Included papers: P10, P11, P13, P05S6, P06S12, P06S14, and P06S16.
• Represents 17.5% of our primary studies.
6. Ethical concerns
• Encompasses themes like human autonomy, informed consent, justice, trans-
parency, and sustainability.
• Included papers: P02, P06, P10, P03S1, P03S3, P06S11, P06S15, and P10S18.
• Represents 22.5% of our primary studies.
7. Influence and manipulation
• Explores user profiling, SR’s persuasive capabilities, and accountability mechanisms.
• Included papers: P08, P09, P11, P12, P05S8, and P14S25.
• Represents 15% of our primary studies.
Notably, many studies span multiple themes. We categorised each based on its
dominant focus, although some touch upon areas like cybersecurity, data privacy, and legal
aspects, making them relevant for various categories. An emergent theme also noted is
social awareness and campaigns related to SRPS, including studies like P02, P08, P09,
P03S1, P03S3, P06S9, and P14S25. It is worth highlighting that the dominant thematic areas
represented in our primary studies are cybersecurity (35%), ethical concerns (22.5%), legal
frameworks (17.5%), data privacy (15%), and user influence and manipulation (15%).
Research on the security aspects of SRPS is diverse and multifaceted, spanning from
highly technical solution proposals to philosophical debates. The predominant methodolo-
gies range from mixed methods to qualitative and quantitative studies. At the same time,
the thematic areas broadly encapsulate technical security solutions, ethical implications,
legal and regulatory concerns, human–robot interaction behaviour, and privacy issues.
4.2. Group 2: In-Depth Examination of Security Aspects, Normative Guidelines, and Design
Principles in SRPS
4.2.1. Specific Security Aspects in SRPS Studies: Consistent Highlights and Definitions
To delve into this research question, we harnessed our findings from the Group 1
results, exploring the seven distinctive themes to elucidate the security facets of SRPS more
clearly. A meticulous analysis is available in our study’s Supplemental Materials, with each
claim substantiated by relevant quotations from the Microsoft document titled “Security
Aspects Repository” [25]. The security aspects consistently emphasised in our studies,
along with their definitions, are as follows:
1. Cybersecurity of SRs and users: This aspect primarily concerns safeguarding SRPS
systems, networks, and data from cyber threats. Common discussions in the studies
pertain to the unique security challenges in SRPS, a broad array of security services
ranging from secure bootstrapping and communication to data storage, software
updates, and device management, and threats such as unauthorised publishing,
unauthorised data access, and denial of service (DoS) attacks. A total of 21 papers,
accounting for 52.5% of our primary studies, cover this aspect.
2. Data privacy of users: Central to this domain is protecting user data, ensuring data
handling that respects individual rights. Major themes involve communication secu-
rity risk assessments, the enactment of access control policies, implications of AI and
robots on privacy, championing the principle of “Privacy by design”, and emphasising
that domestic robots should always respect user privacy expectations. This theme is
covered in 11 papers, making up 27.5% of our primary studies.
3. Physical safety of SRs and users: The focus here is on ensuring the safety of both
users and the physical structure of the robots. Studies underscore the significance
of safety, reliability, and continuity, particularly in socially assistive robots, and the
imperative for designing SRPS systems with user safety as a paramount concern. This
aspect is detailed in 11 papers, representing 27.5% of our primary studies.
4. Reliability and continuity of SRs: This theme emphasises the crucial need for robots
to function reliably and consistently. The importance of both reliability and continuity
is frequently underscored in contexts such as socially assistive robot scenarios, and the
discussions often touch upon the need for industrial control systems to remain opera-
tional even in challenging conditions. Six of our studies, which constitute 15% of the
primary research, delve into this facet.
5. Legal framework for SRPS: This dimension intersects the realms of technology and
legal compliance. Key discussions revolve around the necessity of adhering to es-
tablished legal norms pertinent to SRPS security, safety, and privacy. Additionally,
there is a focus on the debate surrounding robot legal personhood and challenges
associated with data recording and logging, especially when personal data are in play.
Fourteen of our papers, representing 35% of the primary studies, focus on this area.
6. Ethical Consideration for SRPS: This domain navigates the intricate waters of moral
and philosophical considerations associated with robot deployment. Research con-
sistently brings forth issues such as the inherent challenges in designing machines
capable of ethical decision making, the myriad ethical dilemmas users may face when
interfacing with AI technology, and the intertwined ethical and legal responsibilities
of robot behaviour. This theme finds mention in 16 of our papers, encapsulating 40%
of the primary studies.
7. User influence and manipulation: This more nuanced theme seeks to understand the
potential of robots to subtly shape or alter user behaviours, decisions, and perceptions.
Central to this discourse is the unwavering commitment to user security in all its facets,
from physical well-being to data integrity. The recurring motif emphasises robots
The mapping of each security aspect to the primary studies is summarised below:
• Cybersecurity of SR and users (21): P03, P05, P06, P07, P10, P12, P13, P03S2, P03S3, P04S4, P05S7, P05S8, P06S9, P06S10, P06S13, P10S16, P10S17, P10S19, P13S22, P13S23, and P13S24.
• Data privacy of users (20): P02, P03, P04, P05, P06, P07, P12, P13, P14, P03S1, P03S2, P03S3, P04S4, P04S5, P05S7, P06S12, P06S14, P10S17, P14S25, and P14S26.
• Physical safety of SR and users (11): P01, P11, P05S6, P06S10, P06S12, P06S13, P10S16, P10S17, P11S20, P11S21, and P13S23.
• Reliability and continuity of SR (6): P01, P12, P03S2, P06S10, P10S17, and P13S23.
• Legal framework for SRPS (14): P02, P10, P14, P03S2, P03S3, P06S9, P06S11, P06S13, P06S14, P10S16, P10S17, P13S23, P14S25, and P14S26.
• Ethical consideration for SRPS (16): P02, P06, P09, P10, P03S1, P03S2, P03S3, P04S5, P06S9, P06S11, P06S12, P06S13, P06S15, P10S18, P14S25, and P14S26.
• User influence and manipulation (5): P08, P09, P03S1, P05S8, and P06S13.
for SRPS use cases and standards [10]. Almost all papers addressing cybersecurity
reaffirmed the need for communication security as a theme.
4. Authentication and authorisation: Only authorised personnel should have access to the robot’s controls and data. User roles and permissions should be clearly defined, and passwords or other authentication methods should be updated regularly. This theme is a subset of the cybersecurity of SRPS. Applicable security standards reaffirming this theme include (i) NIST SP 800-63, focusing on digital identity guidelines, and (ii) IETF RFC 6749 (OAuth 2.0) and RFC 4120 (the Kerberos network authentication service v5), among others. A minimal sketch illustrating such an access check, combined with the monitoring guideline below, follows this list.
5. Operational transparency: The robot should clearly indicate when it is recording or collecting data, and robots should be easily identifiable with visible markings or badges. This theme is a subset of data privacy. Applicable standards include IEEE P7001-2021 [67], which defines measurable, testable levels of transparency for autonomous systems [68], and ISO/TS 15066 [69], which addresses collaborative robot systems and includes safety requirements that can serve as a foundation for transparency around safety.
6. Robustness against cyber attacks: The robot’s software should be regularly updated to patch known vulnerabilities, and there should be a mechanism to detect and respond to any unauthorised intrusion or malware. For our SMS, this theme is a subset of cybersecurity. Applicable standards include (i) ISO/SAE 21434, focusing on cybersecurity risk management for autonomous systems, including threat analysis and vulnerability assessments; (ii) ISO/IEC 27001, focusing on general information security; (iii) IEC 62443, focusing on industrial network security, which can be applied to broader robotic applications; and (iv) NIST SP 800-183, focusing on IoT security, which can be adapted for SRPS.
7. Human interaction protocols: SR should have guidelines on interacting with differ-
ent age groups, especially vulnerable populations like children, people living with
disability, and senior citizens. SR should be designed to understand and respect social
norms and boundaries. This guideline is reaffirmed by P09, P12, P03S1, and P03S2,
among others of our primary studies. Most existing standards focus on users’ safety,
privacy and security. This theme needs special attention for the successful adoption
of SRPS.
8. Monitoring and reporting: Robots should be monitored for any irregular or unin-
tended behaviours. Any security breaches or unusual events should be logged and
reported promptly. Again, this theme is extensively covered by most cybersecurity
themes of SRPS.
9. Environment-aware operation: The robots should know their environment and ad-
just their operation mode accordingly. For instance, a robot should operate differently
in a crowded space than an empty one.
10. Regular testing and validation: The robot’s systems should be regularly tested to
ensure that they function correctly and safely. Various scenarios can be simulated to
validate the robot’s response to security and safety situations.
11. Ethical considerations: Always consider the ethical implications of deploying SR, especially regarding privacy and human rights. Guidelines and policies should be in place to prevent misuse or unethical behaviour by robots. The many ongoing calls, debates, and concerns about the ethical implications of SRPS underline the need for standardisation in this area.
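To make the authentication/authorisation and monitoring guidelines above more concrete, the following minimal sketch combines a role-based access check with audit logging of every decision. The role names, command names, and logging setup are illustrative assumptions rather than requirements of the cited standards; in a real deployment, the role would come from a verified credential (for example, an OAuth 2.0 access token) rather than a plain string.

# Hedged sketch: role-based authorisation with audit logging for SRPS control
# commands (guidelines 4 and 8). Role and command names are hypothetical.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("srps.audit")

# Each role is granted only the commands it needs.
ROLE_PERMISSIONS = {
    "operator":   {"move", "speak", "stop"},
    "maintainer": {"move", "stop", "update_firmware", "read_logs"},
    "auditor":    {"read_logs"},
}

def authorise(user_role: str, command: str) -> bool:
    """Allow a command only if the role explicitly includes it, and log the decision."""
    allowed = command in ROLE_PERMISSIONS.get(user_role, set())
    audit_log.info("%s role=%s command=%s decision=%s",
                   datetime.now(timezone.utc).isoformat(), user_role, command,
                   "ALLOW" if allowed else "DENY")
    return allowed

# Example: an operator may stop the robot but not push firmware updates.
assert authorise("operator", "stop") is True
assert authorise("operator", "update_firmware") is False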
By diligently following these guidelines, developers and operators of SRPS can proac-
tively establish a secure operational framework that prioritises the protection of individual
rights and the safety of the public. A visualisation of the above proposed security guidelines
is presented in Figure 4.
Figure 4. Visualisation of the proposed security guidelines for SRPS, including communication security, authentication and authorisation, robustness against cyber attacks, monitoring and reporting, and environment-aware operation.
4.2.3. What Design Principles Are Proposed for Augmenting the Security of SRPS,
and What Contributions Do They Make to the Field?
This research question aims to explore the design principles proposed to enhance the security of social robots in public spaces (SRPS) and to evaluate their contributions to the field of robotics security. The findings indicate that 11 out of 40 studies (27.5%) directly addressed security design principles. Further security design principles can be inferred from other studies, although they were not explicitly referenced.
The primary security design principles identified for SRPS and their contributions to
the field are as follows:
1. Security by design: This principle emphasises integrating security measures and
considerations into the system’s design and architecture from the outset. It ensures
that security requirements, controls, and mechanisms are incorporated throughout the
development lifecycle. This creates inherently secure and resilient systems, reducing
vulnerabilities and potential threats. Studies P03, P07, P13, and P13S22 explicitly
referenced this principle, while others indirectly suggested its application.
2. Privacy by design: This principle focuses on proactively integrating privacy considerations into the development of systems, products, and processes. It ensures that individuals’ privacy is safeguarded from the start, contributing to enhanced data protection, regulatory compliance, user trust, and ethical innovation; a data-minimisation sketch illustrating this principle follows this list. P06S15 and P14S26 explicitly mentioned this principle, while other studies implied its application.
3. Human-centred design: This approach places users and their interactions at the core
of design processes, creating user-friendly and relevant solutions. It contributes to
security by ensuring user-friendly security features, usable authentication methods,
accessible security, and promoting user trust in technology adoption. Studies P09,
P12, P03S1, P03S2, and P03S3 directly referenced this principle.
4. Least privilege: This principle dictates granting users and processes the minimum
necessary access rights for their tasks. It limits attack surfaces, unauthorised actions,
lateral movement during attacks, and the impact of breaches. Studies P07 and P03S2
explicitly mentioned this principle, while others indirectly referred to it.
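As a hedged illustration of the privacy-by-design principle above, the sketch below applies data minimisation and pseudonymisation at the point of collection: direct identifiers are replaced with salted one-way hashes and raw audio/video payloads are discarded before anything is stored. The event structure and field names are hypothetical examples, not a prescription from the primary studies.

# Hedged sketch: privacy by design through data minimisation and pseudonymisation.
# The interaction-event fields shown here are hypothetical.
import hashlib
from datetime import datetime, timezone

def pseudonymise(identifier: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((salt + identifier).encode("utf-8")).hexdigest()[:16]

def minimise_interaction_event(raw_event: dict, salt: str) -> dict:
    """Keep only the fields downstream analytics need; never store raw sensor payloads."""
    return {
        "timestamp": raw_event.get("timestamp",
                                   datetime.now(timezone.utc).isoformat()),
        "user_ref": pseudonymise(raw_event["user_id"], salt),
        "interaction_type": raw_event.get("interaction_type", "unknown"),
        # Deliberately omitted: raw audio, video, and fine-grained location traces.
    }

raw = {"user_id": "badge-1234", "interaction_type": "greeting",
       "audio": b"...", "video": b"..."}
stored = minimise_interaction_event(raw, salt="per-deployment-secret")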
Additionally, other security design principles such as transparency, accountability,
secure software development, the separation of components, open design, defence in depth,
and fail-safe defaults were implied but not explicitly mentioned in the primary studies.
In summary, the contributions of these security design principles to the SRPS field
include the creation of secure and resilient systems, protection of user privacy, user-friendly
security measures, regulatory compliance, and enhanced user trust. As the SRPS do-
main continues to evolve, more standardisation and regulations related to security design
principles are anticipated to emerge in the future.
5. Discussion
5.1. Principal Insights and Practical Applications Derived from the Study
Key findings from this SMS include the following:
• Emerging security and safety standards: Our study emphasises the pressing need for
updated and comprehensive security and safety standards tailored to address the
unique requirements of SRPS. Current standards, predominantly based on industrial
robots, are ill-suited to govern the new social robots interacting closely with the public
in diverse environments.
• Data privacy and integrity: As identified in our research, robust data privacy and
confidentiality measures are essential. With myriad potential data interactions between
SRPS and the public, strict adherence to data protection laws, such as GDPR, and the
implementation of strong encryption protocols are imperative.
• Human-centric interaction protocols: Our study reiterates the need for clear and
ethical interaction protocols, particularly when robots interact with vulnerable popu-
lations. These protocols should be informed by a deep understanding of social norms
and human behaviour, advocating for respect and empathy in robot design.
• Responsive and adaptive operations: SRPS need to be cognizant of their environments
and capable of adapting their behaviours accordingly. This ensures the safety of the
individuals they interact with and the robots’ integrity.
• Ethical considerations: Our study underscores the burgeoning debate on the ethical
implications of SRPS. As these robots become more integrated into public life, they
must be designed and programmed to respect human rights and operate within clear
ethical boundaries.
The practical implications of our findings from this SMS are manifold, deeply influ-
encing several sectors, including robotics design, regulatory frameworks, and deployment
strategies. Let us delve into each one:
1. Robot Design
• Addressing safety and ethical considerations through human-centric interaction pro-
tocols: Our findings stress the need for SRPS to have clearly defined ethical
interaction protocols, especially when engaging with vulnerable population
groups. This implies that robots should be designed to understand and respect
human norms and behaviours, ensuring that empathy and respect are central to
their programming.
• Ensuring data privacy through robust security protocols: The highlighted need for
stringent data privacy and integrity measures necessitates that social robots be
equipped with sophisticated security protocols to guard against data breaches
and ensure compliance with the GDPR and applicable national laws.
2. Regulatory Frameworks
• Addressing legal aspects through tailored legal frameworks: The study suggests a
significant gap in the existing legal frameworks to accommodate the unique
challenges SRPS poses. Legislators can draw insights from this study to craft
laws that govern the operation and deployment of SRPS, addressing issues like
user influence and potential manipulations.
• Addressing ethical concerns through ethical oversights: Regulatory bodies would
benefit from establishing committees to oversee the ethical dimensions of SRPS,
ensuring that they operate within defined ethical boundaries and respect human
rights. This could also involve formulating standardised testing and validation
protocols for evaluating SRPS before deployment.
3. Deployment Strategies
• Employing adaptive operations through responsive SRPS deployment: When deploying
SRPS, it is vital to ensure that they are cognizant of their surroundings and
can adapt their behaviours accordingly, safeguarding both the individuals they
interact with and maintaining their own integrity.
• Improving users’ experience through user-centric design: Insights from this study
should encourage developers to focus on enhancing the user experience, paying
attention to aspects like accessibility and the psychological impact SRPS could
have on individuals and communities.
• Shaping future research through trans-disciplinary collaboration: Stakeholders in the
industry should foster collaborations with researchers to delve into the prospec-
tive research avenues identified in this study, working towards the secure, ethical,
and effective integration of SRPS in public spaces.
In summary, the practical implications of our findings can inform a holistic approach
towards the design, deployment, and regulation of SRPS, fostering a landscape where
social robots can safely and harmoniously coexist with humans, augmenting public spaces
while prioritising security and ethical considerations.
5.2. Conceptual Framework Illustrating the Interrelated Key Dimensions of Security in SRPS
Figure 5 outlines a detailed conceptual framework that encapsulates the pivotal security
dimensions of SRPS and delineates their intricate interrelationships with pertinent stake-
holders. This schematic representation is segmented into four cardinal phases—regulation,
production, operation, and interaction—each representing a critical stage in the SRPS lifecycle.
Figure 5. Visual framework illustrating SRPS security dimensions and stakeholder roles.
In the regulation phase, the onus rests upon regulators and policymakers to forge com-
prehensive standards and guidelines grounded in legal, ethical, and governance principles
pertinent to SRPS. These directives not only aim to safeguard security and safety but also
foster a nurturing environment for the SRPS ecosystem to flourish. Feedback loops are
instituted to refine these norms based on real-time insights and developments continually.
The production tier is steered by developers, designers, and manufacturers entrusted
with the task of meticulously embedding the legislated standards into the very fabric of
SRPS. This involves sculpting security frameworks and instituting privacy safeguards that
are both robust and adaptable, thus upholding the precept of security by design. This phase
leverages a profound understanding of the comprehensive threat landscape to devise resilient
mechanisms against emerging threats.
At the operation echelon, SRPS business proprietors and system administrators come
into play, spearheading the efficacious implementation of the prescribed frameworks while
advocating for a user-centric approach. This phase is characterised by unyielding vigilance,
with regular monitoring and timely updates to mitigate evolving risks and ensure a secure
service delivery that stands the test of time.
The interaction phase envisages a collaborative role for the end users, nurturing them
to be savvy interactants through the dissemination of awareness and training pertaining to
SRPS services. This stage embraces the feedback derived from user experiences, fostering
a two-way dialogue to inculcate enhancements that are in sync with user expectations
and preferences.
By elucidating the synergistic relationships between these phases and the stakeholders,
Figure 5 serves as a beacon, guiding stakeholders at every juncture to foster a secure, ethical,
and user-friendly SRPS environment, thereby steering the field towards a future marked by
trust and mutual growth.
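To make the structure of Figure 5 more explicit, the sketch below models the four phases, their stakeholders, and their inputs and outputs as a simple data structure. The contents are paraphrased from the description above; the representation itself and the feedback targets are illustrative assumptions rather than part of the original framework.

# Hedged sketch: the SRPS lifecycle of Figure 5 captured as a data structure.
# Phase contents are paraphrased from the text; feedback targets are assumptions.
from dataclasses import dataclass, field

@dataclass
class Phase:
    name: str
    stakeholders: list[str]
    inputs: list[str]
    outputs: list[str]
    feeds_back_to: list[str] = field(default_factory=list)

SRPS_LIFECYCLE = [
    Phase("regulation", ["regulators", "policymakers"],
          ["legal, ethical and governance principles", "stakeholder feedback"],
          ["standards and guidelines"]),
    Phase("production", ["developers", "designers", "manufacturers"],
          ["standards and guidelines", "threat landscape"],
          ["security frameworks", "privacy safeguards"],
          feeds_back_to=["regulation"]),
    Phase("operation", ["business owners", "system administrators"],
          ["security frameworks", "continuous monitoring and updating"],
          ["secure, safe and user-centric SRPS services"],
          feeds_back_to=["production", "regulation"]),
    Phase("interaction", ["end users"],
          ["users' awareness and training"],
          ["feedback to other stakeholders"],
          feeds_back_to=["operation", "production", "regulation"]),
]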
5.3. Delving into the Security Challenges of SRPS: Concrete Instances and
Forward-Looking Solutions
Delving deeper into the challenges faced in securing SRPS and addressing them is
pivotal in advancing the field. Let us break down these challenges along with potential
solutions and their implications:
1. Data privacy and integrity
• Challenges: The interception of data transferred between SRPS and central servers
leading to privacy breaches.
• Potential solutions: Implementing end-to-end encryption and robust authentication mechanisms (a minimal encryption sketch follows this list).
• Practical implications: Enhancing data privacy will build trust among users and
foster broader acceptance of SRPS.
2. Safety standards
• Challenges: The existing safety standards are derived from industrial robot frame-
works unsuited for SRPS operating in public spaces with diverse and dynamic
environments.
• Potential solutions: Developing safety standards specifically tailored for SRPS,
emphasising real-time adaptive safety mechanisms.
• Practical implications: Customised safety standards would ensure the safe interac-
tion of SRPS with humans, mitigating risks and preventing accidents.
3. Ethical considerations
• Challenges: The potential for SRPS to be used unethically, such as for surveillance
or influencing user behaviour subtly.
• Potential solutions: Establishing ethical guidelines that dictate the operations of
SRPS, including transparency in data usage and respecting user autonomy.
• Practical implications: Addressing ethical concerns would foster a responsible
deployment of SRPS, safeguarding individual and societal values.
4. Legal frameworks
• Challenges: The current legal frameworks are inadequate to address the unique
challenges posed by SRPS, including liability issues in case of malfunctions
or accidents.
• Potential solutions: Crafting comprehensive legal frameworks that outline the
responsibilities and liabilities associated with deploying SRPS.
• Practical implications: Legal frameworks would provide a clear pathway for
accountability, promoting responsible innovation and deployment of SRPS.
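As a hedged illustration of the encryption measure suggested under the first challenge above, the sketch below encrypts a telemetry payload on the robot before transmission, using the authenticated symmetric Fernet scheme from the third-party Python cryptography package. The payload fields are hypothetical, and key provisioning, rotation, and transport-level authentication, which a real deployment would also need, are deliberately out of scope.

# Hedged sketch: encrypting robot-to-server telemetry with an authenticated
# symmetric scheme (Fernet, from the third-party "cryptography" package).
# Payload fields and key handling are simplified for illustration only.
import json
from cryptography.fernet import Fernet

# In practice the key would be provisioned and rotated by a key-management
# service; generating it inline is purely for demonstration.
key = Fernet.generate_key()
cipher = Fernet(key)

telemetry = {"robot_id": "srps-demo-01", "battery": 0.82, "zone": "lobby"}
token = cipher.encrypt(json.dumps(telemetry).encode("utf-8"))  # sent over the network

# Server side: decryption fails loudly if the token has been tampered with.
received = json.loads(cipher.decrypt(token).decode("utf-8"))
assert received == telemetry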
• Unanswered Question: How can this framework serve as a benchmark for evaluat-
ing the security readiness of SRPS before their deployment?
5. Ethical and Regulatory Implications
• Research Area: Exploring the legal landscape and the ethical dimensions govern-
ing the deployment and operation of SRPS.
• Unanswered Question: What new laws or amendments are needed to address the
unique challenges posed by SRPS, and how can they safeguard individual and
community well-being?
6. User Experience and Human–Robot Interaction
• Research Area: Investigating the psychological impact of SRPS on individuals and
communities, focusing on user experience design and accessibility.
• Unanswered Question: How can SRPS be designed to enhance user experience
while mitigating potential negative psychological impacts?
By venturing into these areas, future research can build upon the current study’s
findings, fostering a secure, ethical, and effective integration of social robots into our public
spaces while promoting individual and collective well-being. Each direction provides
a pathway to delve deeper into the nuances of SRPS security, addressing unresolved
questions and paving the way for groundbreaking advancements in the field.
6. Conclusions
As social robots become increasingly prevalent in public spaces, ensuring their se-
cure and ethical operation is paramount. This study systematically analysed the security
aspects of SRPS, shedding light on the multi-faceted challenges and the requisite consid-
erations for their deployment. Our findings indicate that the development and operation
of SRPS require a trans-disciplinary approach, integrating physical safety, data protec-
tion, communication security, human interaction protocols, and ethical considerations,
among other aspects.
This research highlights several pivotal findings related to SRPS. Firstly, there is an
urgent demand for modernised safety standards specific to SRPS, as existing ones based on
industrial robots are inadequate. Data privacy is paramount, necessitating strict adherence
to laws like the GDPR and robust encryption. The study stresses the importance of ethical
interaction protocols, especially with vulnerable groups, grounded in societal norms and
emphasising empathy in robot design. Additionally, SRPS must be environmentally aware
and adaptive, ensuring safety and maintaining their integrity. Lastly, the research highlights
the growing ethical debates surrounding SRPS, emphasising the need for these robots to
uphold human rights and clear ethical guidelines.
This study lays the groundwork for diverse avenues in the domain of SRPS. Areas for
exploration include examining the influence of cultural differences on SRPS acceptance,
creating standardised testing and validation protocols for SRPS, and delving into real-world
case studies to understand practical challenges. Additionally, there is a pressing need to
develop a comprehensive security framework for SRPS that addresses cybersecurity, data
privacy and ethics concerns [70]. Future research should also assess the legal and ethical
regulations governing SRPS, investigate user experience design, and the psychological
impacts of SRPS on communities. Such endeavours aim to ensure that social robots are
introduced securely and ethically into public spaces, enriching human life while prioritising
individual and societal well-being.
Finally, this paper underlines the vital importance of a proactive, integrated, and holis-
tic approach to the security and ethical management of SRPS. As we stand on the cusp of a
new era in human–robot interaction, the guidelines and themes highlighted in this study
serve as a foundational framework for the responsible development and deployment of
these promising technologies. By embracing these guidelines with a commitment to ongo-
ing refinement and adaptability, developers and operators can ensure that SRPS not only enrich our public spaces but do so in a manner that is secure, respectful of individual
rights, and aligned with society’s broader safety and well-being.
Supplementary Materials: The following supporting information can be downloaded at: https://
data.mendeley.com/datasets/5xx2wccn7p/1, Security Aspects Repository.xlsx; SMS Data Extraction
Checklist.docx.
Author Contributions: Conceptualisation, S.O.O., Y.Z.A., F.S. and H.R.; methodology, S.O.O., Y.Z.A.
and F.S.; software, S.O.O.; validation, Y.Z.A., F.S. and H.R.; formal analysis, S.O.O.; investigation,
S.O.O., F.S. and H.R.; resources, Y.Z.A.; data curation, S.O.O.; writing—original draft preparation,
S.O.O. and F.S.; writing—review and editing, H.R. and Y.Z.A.; visualisation, S.O.O.; supervision,
Y.Z.A.; project administration, Y.Z.A.; funding acquisition, Y.Z.A. All authors have read and agreed
to the published version of the manuscript.
Funding: This research was funded by the Norwegian Research Council under the SecuRoPS project
“User-centered Security Framework for Social Robots in Public Spaces” with project code 321324.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data presented in this study are openly available in Mendeley data
at Supplementary Materials.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Mubin, O.; Ahmad, M.I.; Kaur, S.; Shi, W.; Khan, A. Social Robots in Public Spaces: A Meta-review. In Proceedings of the Social
Robotics, Qingdao, China, 28–30 November 2018; Lecture Notes in Computer Science; Ge, S.S., Cabibihan, J.J., Salichs, M.A.,
Broadbent, E., He, H., Wagner, A.R., Castro-González, Á., Eds.; pp. 213–220. [CrossRef]
2. Aymerich-Franch, L.; Ferrer, I. Social Robots as a Brand Strategy. In Innovation in Advertising and Branding Communication;
Routledge: London, UK, 2020.
3. Kyrarini, M.; Lygerakis, F.; Rajavenkatanarayanan, A.; Sevastopoulos, C.; Nambiappan, H.R.; Chaitanya, K.K.; Babu, A.R.;
Mathew, J.; Makedon, F. A Survey of Robots in Healthcare. Technologies 2021, 9, 8. [CrossRef]
4. Guggemos, J.; Seufert, S.; Sonderegger, S.; Burkhard, M. Social Robots in Education: Conceptual Overview and Case Study of Use.
In Orchestration of Learning Environments in the Digital World; Cognition and Exploratory Learning in the Digital Age; Ifenthaler,
D., Isaías, P., Sampson, D.G., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 173–195. [CrossRef]
5. Oruma, S.O.; Sánchez-Gordón, M.; Colomo-Palacios, R.; Gkioulos, V.; Hansen, J.K. A Systematic Review on Social Robots in
Public Spaces: Threat Landscape and Attack Surface. Computers 2022, 11, 181. [CrossRef]
6. Sarrica, M.; Brondi, S.; Fortunati, L. How Many Facets Does a “Social Robot” Have? A Review of Scientific and Popular Definitions
Online. Inf. Technol. People 2019, 33, 1–21. [CrossRef]
7. Golubchikov, O.; Thornbush, M. Artificial Intelligence and Robotics in Smart City Strategies and Planned Smart Development.
Smart Cities 2020, 3, 1133–1144. [CrossRef]
8. Ayele, Y.Z.; Chockalingam, S.; Lau, N. Threat Actors and Methods of Attack to Social Robots in Public Spaces. In Proceedings
of the HCI for Cybersecurity, Privacy and Trust, Copenhagen, Denmark, 23–28 July 2023; Lecture Notes in Computer Science;
Moallem, A., Ed.; pp. 262–273. [CrossRef]
9. Morales, C.G.; Carter, E.J.; Tan, X.Z.; Steinfeld, A. Interaction Needs and Opportunities for Failing Robots. In Proceedings of the
2019 on Designing Interactive Systems Conference, New York, NY, USA, 17–19 August 2019; DIS ’19, pp. 659–670. [CrossRef]
10. Oruma, S.O.; Petrovic, S. Security Threats to 5G Networks for Social Robots in Public Spaces: A Survey. IEEE Access 2023,
11, 63205–63237. [CrossRef]
11. Boada, J.P.; Maestre, B.R.; Genís, C.T. The Ethical Issues of Social Assistive Robotics: A Critical Literature Review. Technol. Soc.
2021, 67, 101726. [CrossRef]
12. Fosch-Villaronga, E.; Mahler, T. Cybersecurity, Safety and Robots: Strengthening the Link between Cybersecurity and Safety in
the Context of Care Robots. Comput. Law Secur. Rev. 2021, 41, 105528. [CrossRef]
13. Mayoral-Vilches, V. Robot Cybersecurity, a Review. Int. J. Cyber Forensics Adv. Threat. Investig. 2021, 20. [CrossRef]
14. Mavrogiannis, C.; Baldini, F.; Wang, A.; Zhao, D.; Trautman, P.; Steinfeld, A.; Oh, J. Core Challenges of Social Robot Navigation:
A Survey. ACM Trans. Hum.-Robot Interact. 2023, 12, 36:1–36:39. [CrossRef]
15. Lutz, C.; Schöttler, M.; Hoffmann, C.P. The Privacy Implications of Social Robots: Scoping Review and Expert Interviews. Mob.
Media Commun. 2019, 7, 412–434. [CrossRef]
16. Andraško, J.; Mesarčík, M.; Hamuľák, O. The Regulatory Intersections between Artificial Intelligence, Data Protection and Cyber
Security: Challenges and Opportunities for the EU Legal Framework. AI Soc. 2021, 36, 623–636. [CrossRef]
17. Baisch, S.; Kolling, T.; Schall, A.; Rühl, S.; Selic, S.; Kim, Z.; Rossberg, H.; Klein, B.; Pantel, J.; Oswald, F.; et al. Acceptance of
Social Robots by Elder People: Does Psychosocial Functioning Matter? Int. J. Soc. Robot. 2017, 9, 293–307. [CrossRef]
18. Recchiuto, C.T.; Sgorbissa, A. A Feasibility Study of Culture-Aware Cloud Services for Conversational Robots. IEEE Robot. Autom.
Lett. 2020, 5, 6559–6566. [CrossRef]
19. Vulpe, A.; Crăciunescu, R.; Drăgulinescu, A.M.; Kyriazakos, S.; Paikan, A.; Ziafati, P. Enabling Security Services in Socially
Assistive Robot Scenarios for Healthcare Applications. Sensors 2021, 21, 6912. [CrossRef] [PubMed]
20. Petersen, K.; Vakkalanka, S.; Kuzniarz, L. Guidelines for Conducting Systematic Mapping Studies in Software Engineering: An
Update. Inf. Softw. Technol. 2015, 64, 1–18. [CrossRef]
21. Petersen, K.; Feldt, R.; Mujtaba, S.; Mattsson, M. Systematic Mapping Studies in Software Engineering. In Proceedings of the 12th
International Conference on Evaluation and Assessment in Software Engineering (EASE), Bari, Italy, 26–27 June 2008. [CrossRef]
22. Kitchenham, B. Procedures for Performing Systematic Reviews. In Keele University Technical Report TR/SE-0401; Keele University:
Newcastle, UK, 2004; Volume 33, pp. 1–26.
23. Kitchenham, B.; Brereton, P. A Systematic Review of Systematic Review Process Research in Software Engineering. Inf. Softw.
Technol. 2013, 55, 2049–2075. [CrossRef]
24. Weidt, F.; Rodrigo, S. Systematic Literature Review in Computer Science-a Practical Guide; Technical Report RelaTeDCC 002/2016;
Federal University of Juiz de Fora: Juiz de Fora, Brazil, 2016.
25. Oruma, S.O.; Ayele, Y.Z.; Sechi, F.; Rødsethol, H. Supplementary Materials to “Security Aspects of Social Robots in Public Spaces: A
Systematic Mapping Study”—Mendeley Data. Mendeley Data 2023. [CrossRef]
26. Ampatzoglou, A.; Bibi, S.; Avgeriou, P.; Verbeek, M.; Chatzigeorgiou, A. Identifying, Categorizing and Mitigating Threats to Validity
in Software Engineering Secondary Studies. Inf. Softw. Technol. 2019, 106, 201–230. [CrossRef]
27. Sjøberg, D.I.; Bergersen, G.R. Improving the Reporting of Threats to Construct Validity. In Proceedings of the 27th International
Conference on Evaluation and Assessment in Software Engineering, Oulu, Finland, 14–16 June 2023; EASE ’23, pp. 205–209.
[CrossRef]
28. Liu, X.; Ge, S.S.; Zhao, F.; Mei, X. A Dynamic Behavior Control Framework for Physical Human-Robot Interaction. J. Intell. Robot.
Syst. 2020, 101, 14. [CrossRef]
29. Farina, M.; Zhdanov, P.; Karimov, A.; Lavazza, A. AI and Society: A Virtue Ethics Approach. AI Soc. 2022, 1–14. [CrossRef]
30. Marchang, J.; Di Nuovo, A. Assistive Multimodal Robotic System (AMRSys): Security and Privacy Issues, Challenges, and
Possible Solutions. Appl. Sci. 2022, 12, 2174. [CrossRef]
31. Sharkey, A.; Sharkey, N. We Need to Talk about Deception in Social Robotics! Ethics Inf. Technol. 2021, 23, 309–316. [CrossRef]
32. Cerrudo, C. Hacking Robots Before Skynet. In Cybersecurity Insights; IOActive Inc.: Seattle, WA, USA, 2017; pp. 1–17.
33. Wachter, S.; Mittelstadt, B.; Floridi, L. Transparent, Explainable, and Accountable AI for Robotics. Sci. Robot. 2017, 2, eaan6080.
[CrossRef]
34. Lin, P.C.; Yankson, B.; Chauhan, V.; Tsukada, M. Building a Speech Recognition System with Privacy Identification Information
Based on Google Voice for Social Robots. J. Supercomput. 2022, 78, 15060–15088. [CrossRef]
35. Krupp, M.M.; Rueben, M.; Grimm, C.M.; Smart, W.D. Privacy and Telepresence Robotics: What Do Non-scientists Think? In
Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, New York, NY,
USA, 6–9 March 2017; HRI ’17, pp. 175–176. [CrossRef]
36. Rueben, M.; Bernieri, F.J.; Grimm, C.M.; Smart, W.D. Framing Effects on Privacy Concerns about a Home Telepresence Robot. In
Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA, 6–9 March 2017;
HRI ’17, pp. 435–444. [CrossRef]
37. Abate, A.F.; Barra, P.; Bisogni, C.; Cascone, L.; Passero, I. Contextual Trust Model with a Humanoid Robot Defense for Attacks to
Smart Eco-Systems. IEEE Access 2020, 8, 207404–207414. [CrossRef]
38. Silva, J.R.; Simão, M.; Mendes, N.; Neto, P. Navigation and Obstacle Avoidance: A Case Study Using Pepper Robot. In Proceedings
of the IECON 2019—45th Annual Conference of the IEEE Industrial Electronics Society, Lisbon, Portugal, 14–17 October 2019;
Volume 1, pp. 5263–5268. [CrossRef]
39. Barra, P.; Bisogni, C.; Rapuano, A.; Abate, A.F.; Iovane, G. HiMessage: An Interactive Voice Mail System with the Humanoid
Robot Pepper. In Proceedings of the 2019 IEEE International Conference on Dependable, Autonomic and Secure Computing,
International Conference on Pervasive Intelligence and Computing, International Conference on Cloud and Big Data Computing,
International Conference on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech), Fukuoka, Japan,
5–8 August 2019; pp. 652–656. [CrossRef]
40. Abate, A.F.; Bisogni, C.; Cascone, L.; Castiglione, A.; Costabile, G.; Mercuri, I. Social Robot Interactions for Social Engineering:
Opportunities and Open Issues. In Proceedings of the 2020 IEEE International Conference on Dependable, Autonomic and Secure
Computing, International Conference on Pervasive Intelligence and Computing, International Conference on Cloud and Big Data
Computing, International Conference on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech),
Calgary, AB, Canada, 17–22 August 2020; pp. 539–547. [CrossRef]
41. Poulsen, A.; Fosch-Villaronga, E.; Burmeister, O.K. Cybersecurity, Value Sensing Robots for LGBTIQ+ Elderly, and the Need for
Revised Codes of Conduct. Australas. J. Inf. Syst. 2020, 24, 1–16. [CrossRef]
42. Clark, G.W.; Doran, M.V.; Andel, T.R. Cybersecurity Issues in Robotics. In Proceedings of the 2017 IEEE Conference on Cognitive
and Computational Aspects of Situation Management (CogSIMA), Savannah, GA, USA, 27–31 March 2017; pp. 1–5. [CrossRef]
43. Cresswell, K.; Cunningham-Burley, S.; Sheikh, A. Health Care Robotics: Qualitative Exploration of Key Challenges and Future
Directions. J. Med. Internet Res. 2018, 20, e10410. [CrossRef]
44. Fosch-Villaronga, E. Robots, Healthcare, and the Law: Regulating Automation in Personal Care, 1st ed.; Routledge: London, UK, 2019.
[CrossRef]
45. Fosch-Villaronga, E.; Felzmann, H.; Ramos-Montero, M.; Mahler, T. Cloud Services for Robotic Nurses? Assessing Legal and
Ethical Issues in the Use of Cloud Services for Healthcare Robots. In Proceedings of the 2018 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 290–296. [CrossRef]
46. Fosch-Villaronga, E.; Millard, C. Cloud Robotics Law and Regulation: Challenges in the Governance of Complex and Dynamic
Cyber–Physical Ecosystems. Robot. Auton. Syst. 2019, 119, 77–91. [CrossRef]
47. Poulsen, A.; Burmeister, O.K.; Kreps, D. The Ethics of Inherent Trust in Care Robots for the Elderly. In This Changes Everything—
ICT and Climate Change: What Can We Do? IFIP Advances in Information and Communication Technology; Kreps, D., Ess, C.,
Leenen, L., Kimppa, K., Eds.; Springer Nature Switzerland AG: Cham, Switzerland, 2018; pp. 314–328. [CrossRef]
48. Bryson, J.J.; Diamantis, M.E.; Grant, T.D. Of, for, and by the People: The Legal Lacuna of Synthetic Persons. Artif. Intell. Law 2017,
25, 273–291. [CrossRef]
49. Zhang, Y.; Qian, Y.; Wu, D.; Hossain, M.S.; Ghoneim, A.; Chen, M. Emotion-Aware Multimedia Systems Security. IEEE Trans.
Multimed. 2019, 21, 617–624. [CrossRef]
50. Saunderson, S.P.; Nejat, G. Persuasive Robots Should Avoid Authority: The Effects of Formal and Real Authority on Persuasion
in Human-Robot Interaction. Sci. Robot. 2021, 6, eabd5186. [CrossRef] [PubMed]
51. Schneider, S.; Liu, Y.; Tomita, K.; Kanda, T. Stop Ignoring Me! On Fighting the Trivialization of Social Robots in Public Spaces.
ACM Trans. Hum.-Robot Interact. 2022, 11, 11:1–11:23. [CrossRef]
52. Giansanti, D.; Gulino, R.A. The Cybersecurity and the Care Robots: A Viewpoint on the Open Problems and the Perspectives.
Healthcare 2021, 9, 1653. [CrossRef] [PubMed]
53. Gordon, J.S. Building Moral Robots: Ethical Pitfalls and Challenges. Sci. Eng. Ethics 2020, 26, 141–157. [CrossRef]
54. Miller, J.; Williams, A.B.; Perouli, D. A Case Study on the Cybersecurity of Social Robots. In Proceedings of the Companion of the
2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; HRI ’18, pp. 195–196.
[CrossRef]
55. Akalin, N.; Kristoffersson, A.; Loutfi, A. The Influence of Feedback Type in Robot-Assisted Training. Multimodal Technol. Interact.
2019, 3, 67. [CrossRef]
56. Akalin, N.; Kristoffersson, A.; Loutfi, A. Evaluating the Sense of Safety and Security in Human–Robot Interaction with Older People.
In Social Robots: Technological, Societal and Ethical Aspects of Human-Robot Interaction; Human–Computer Interaction Series; Korn, O.,
Ed.; Springer International Publishing: Cham, Switzerland, 2019; pp. 237–264. [CrossRef]
57. Akalin, N.; Kiselev, A.; Kristoffersson, A.; Loutfi, A. An Evaluation Tool of the Effect of Robots in Eldercare on the Sense of Safety
and Security. In Proceedings of the Social Robotics, Tsukuba, Japan, 22–24 November 2017; Lecture Notes in Computer Science;
Kheddar, A., Yoshida, E., Ge, S.S., Suzuki, K., Cabibihan, J.J., Eyssel, F., He, H., Eds.; Springer Nature Switzerland AG: Cham,
Switzerland, 2017; pp. 628–637. [CrossRef]
58. Randall, N.; Sabanovic, S.; Milojevic, S.; Gupta, A. Top of the Class: Mining Product Characteristics Associated with Crowdfunding
Success and Failure of Home Robots. Int. J. Soc. Robot. 2022, 14, 149–163. [CrossRef]
59. Mazzeo, G.; Staffa, M. TROS: Protecting Humanoids ROS from Privileged Attackers. Int. J. Soc. Robot. 2020, 12, 827–841.
[CrossRef]
60. Breiling, B.; Dieber, B.; Schartner, P. Secure Communication for the Robot Operating System. In Proceedings of the 2017 Annual
IEEE International Systems Conference (SysCon), Montreal, QC, Canada, 24–27 April 2017; pp. 1–6. [CrossRef]
61. Dieber, B.; Breiling, B.; Taurer, S.; Kacianka, S.; Rass, S.; Schartner, P. Security for the Robot Operating System. Robot. Auton. Syst.
2017, 98, 192–203. [CrossRef]
62. Dieber, B.; Kacianka, S.; Rass, S.; Schartner, P. Application-Level Security for ROS-based Applications. In Proceedings of the 2016
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Republic of Korea, 9–14 October 2016;
pp. 4477–4482. [CrossRef]
63. Chatterjee, S.; Chaudhuri, R.; Vrontis, D. Usage Intention of Social Robots for Domestic Purpose: From Security, Privacy, and
Legal Perspectives. Inf. Syst. Front. 2021, 1–16. [CrossRef]
64. Chatterjee, S. Impact of AI Regulation on Intention to Use Robots: From Citizens and Government Perspective. Int. J. Intell.
Unmanned Syst. 2019, 8, 97–114. [CrossRef]
65. Pagallo, U. The Impact of Domestic Robots on Privacy and Data Protection, and the Troubles with Legal Regulation by Design. In
Data Protection on the Move: Current Developments in ICT and Privacy/Data Protection; Law, Governance and Technology Series;
Gutwirth, S., Leenes, R., De Hert, P., Eds.; Springer: Dordrecht, The Netherlands, 2016; pp. 387–410. [CrossRef]
66. Salvini, P.; Paez-Granados, D.; Billard, A. On the Safety of Mobile Robots Serving in Public Spaces: Identifying Gaps in EN ISO
13482:2014 and Calling for a New Standard. ACM Trans. Hum.-Robot Interact. 2021, 10, 1–27. [CrossRef]
67. IEEE 7001-2021—IEEE Standard for Transparency of Autonomous Systems. Available online: https://ieeexplore.ieee.org/document/
9726144 (accessed on 19 September 2023).
68. Winfield, A.F.T.; Booth, S.; Dennis, L.A.; Egawa, T.; Hastie, H.; Jacobs, N.; Muttram, R.I.; Olszewska, J.I.; Rajabiyazdi, F.;
Theodorou, A.; et al. IEEE P7001: A Proposed Standard on Transparency. Front. Robot. AI 2021, 8, 665729. [CrossRef] [PubMed]
69. ISO/TS 15066:2016 Robots and Robotic Devices—Collaborative Robots. Available online: https://www.iso.org/standard/62996.
html (accessed on 19 September 2023).
70. Oruma, S.O. Towards a User-centred Security Framework for Social Robots in Public Spaces. In Proceedings of the 27th
International Conference on Evaluation and Assessment in Software Engineering, Oulu, Finland, 14–16 June 2023; EASE ’23,
pp. 292–297. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.