PMATT 416212 Ericsson Cloud RAN - PSD - Draft - V1

The Project Scope Document outlines the Ericsson Cloud RAN Certification project, which aims to implement a cloud-native Radio Access Network (RAN) solution using commercial off-the-shelf hardware. The project seeks to achieve minimum viable product (MVP) features by early 2026, transitioning AT&T's network towards more flexible and cost-effective cloud-based architectures. Key components include the installation and certification of the Ericsson Cloud RAN solution, with a focus on interoperability and integration with existing network management systems.


Project Scope Document (PSD)

Ericsson Cloud RAN Certification


PMATT # 416212
PID # Multiple

Version 0.1

Client/Product Owner: Ye Chen/Nicholas Thompson


Client Project Manager/Author: Andrew Wood, Daniela Diefenbach, Ye Chen, Yolius
Diroo
Sponsoring Business Unit: Network

AT&T Proprietary (Internal Use Only)


Not for use or disclosure outside the AT&T companies except under written agreement
Project Scope Document (PSD)

Table of Contents

1 EXECUTIVE SUMMARY
2 PROJECT / PRODUCT DESCRIPTION
3 MARKET ASSESSMENT / STRATEGY
4 ENTERPRISE RISK MANAGEMENT / LEGAL
5 METRICS / TABLES
6 DEFINITION OF TERMS/ABBREVIATIONS/ACRONYMS
7 SCOPE DOCUMENT APPROVAL
8 DOCUMENT VERSION CONTROL


1 EXECUTIVE SUMMARY

What is Cloud RAN?


Cloud RAN is a RAN (Radio Access Network) implementation in which the Centralized Unit (CU) and Distributed
Unit (DU) RAN functions are virtualized and run on general-purpose, commercial off-the-shelf (COTS) hardware.
Cloud RAN refers to realizing RAN functions on a generic compute platform instead of a purpose-built hardware
platform, separating the RAN baseband software from the RAN baseband hardware. This baseband software can
run on any capable COTS hardware, with or without integrated accelerators, using cloud-native tools and
processes to manage both the software and the hardware. Cloud RAN increases flexibility, reduces cost, and
makes the network easier to scale. When the virtualized functions (vDU and vCU) are implemented in a true
cloud-native fashion, the virtual RAN (vRAN) implementation is also called Cloud RAN. 5G calls for new levels of
flexibility in architecting, scaling, and deploying telecom networks, and cloud technology complements the
existing tried and trusted technologies in the RAN domain.

Ericsson Cloud RAN Overview


Ericsson Cloud RAN is a cloud-native software solution handling compute functionality in the RAN. It will allow
AT&T to seamlessly evolve towards cloud-native technologies and open network architectures, with the vision of
deploying cloud-native networks on any site, any cloud, and any server platform. In the initial and current
solution, Ericsson Cloud RAN will be deployed on the Ericsson Cloud Native Infrastructure Solution (CNIS). The
long-term goal is to deploy Cloud RAN on a 3rd-party CaaS vendor selected by AT&T.

The Ericsson Cloud RAN solution will follow O-RAN architecture and interfaces (Open Fronthaul, O1, O2). The
Cloud RAN Software consists of two main containerized functions, the vCU and vDU, which will be distributed
geographically. The vCU (Centralized Unit) will be placed at a centralized location (e.g. MTSO), while the vDU
(Distributed Unit) will be located at the cell site or Hub. The COTS HW for the vCU or vDU will be provided by
Dell.

The Cloud RAN solution consists of the full-stack Ericsson software, including Cloud RAN and the Cloud Native
Infrastructure Solution (CNIS), and also includes interoperability with selected Ericsson radios and EN/NM
platforms. For network element management, the existing ENM platform will be used as an interim EN/NM for
Cloud RAN certification until the Ericsson Intelligent Automation Platform (EIAP) supports O1 and is ready to
integrate with Cloud RAN for network and infrastructure management. There will be Tech Dev development
work/tooling automation to support FCAPS for Ericsson Cloud RAN and the infrastructure on which it resides
(new inventory, new alarms, new PM counters, etc.). EIAP will then be able to provide a single pane of glass
correlating alarms and parameters from the infrastructure with alarms and parameters from the application
running on top of CNIS.

Cloud RAN represents a shift in architecture for today’s radio networks. The change in network architecture shall
drive several transport changes to handle the new interfaces and new requirements to make Cloud RAN Solution
work in the most efficient manner. This shall drive Tech Dev development/tooling automation to support the Cloud
RAN + Transport architecture.

The goal of this project is to deliver the “MVP” features to achieve MVP GA in early 2026 for scale launch. MVP
refers to achieving Cloud RAN feature parity with classical RAN and production deployment/testing of the Tech
Dev “Phase 2A” release. Ericsson Cloud RAN is part of the overall RAN Transformation effort, recently kicked
off by the SMO/EIAP PMATT 415761. The figure below shows the overall Program high-level timeline; the red
box represents the scope for this PMATT.

1.2. Is this project part of the Chairman’s AT&T 2024 Priorities? Which Chairman’s Priorities does this address?
Yes
1.3. Describe the current situation that makes the implementation of the new product necessary or desirable.
The current AT&T radio access network (including both 5G NR and 4G LTE) is based on purpose-built hardware
from our incumbent RAN vendors (e.g. Ericsson 6651 G3 and 6630 G2 baseband, Nokia FSM4 ABIO/ASIL cards),
and RAN software with no support for open interfaces.

The evolution to 5G networks based on the 3GPP standard is the result of continuous improvement of
telecommunications technologies, where each new release has brought enhanced capabilities: support for
more spectrum, new services such as ultra-reliable low-latency communication, and air-interface
enhancements in performance and efficiency such as massive MIMO. Full 5G capability requires more
processing power, so it is critical to take advantage of the fast evolution of general COTS hardware, which
makes computing much less expensive. It is strategically critical for AT&T to move to a cloud-based RAN
platform using COTS HW with full openness. Open RAN advocates standardization and open interfaces
between RAN devices. A multivendor approach fosters competition, helping to reduce costs and increase
innovation in the industry. Partnerships and standards collaborations spur development, ultimately maximizing
the benefits as cloudification is introduced from 5G towards the fully cloud-native 6G era.


1.4. List and describe corporate-wide drivers, (e.g. Regulatory, Market demand, Customer Service,
Application Rationalization, Automation, etc.).
• Operators have several key needs that their Cloud RAN architecture must cater to. Chief among them are
performance, efficiency, and the disaggregation of hardware from software. Also desirable is support for an
ecosystem of Cloud RAN suppliers who can offer their differentiated algorithms and product variants on a
common cloud infrastructure, which simplifies deployment and operations wherever possible. This minimizes
integration complexity and helps ensure performance targets are met. Another benefit comes from the ability
to create more unified and common operations models across all network elements and vendors, and to add
an increased level of automation of network operations, thereby optimizing the total cost of ownership and
performance for different deployments. Overall, Cloud RAN offers the potential that operators increasingly
require to run high-performing networks that are flexible, agile, and reliable.

• The ultimate benefit of Cloud RAN is the flexibility and scalability it provides to AT&T. The wide array of
deployment options allows operators to choose hardware and infrastructure that best suit their needs,
budget, and business model. Choosing the right architecture and configuration of hardware and
acceleration can help AT&T reap the full benefits of Cloud RAN. This creates conditions to use software
from the industry’s leading RAN solutions and match the performance of purpose-built hardware (RAN
compute baseband) and software deployments.

2 PROJECT / PRODUCT DESCRIPTION


2.1. Scope
2.1.1. List what is in Scope for this Project.
The scope of this project is to install, integrate, certify, and commercially accept Ericsson Cloud RAN on
Ericsson CNIS. This PSD covers the Ericsson Cloud RAN “MVP” launch and will include the Ericsson
“quarterly” SW releases in 2024/2025, to achieve a Q1 2026 GA.


The high-level architecture for the Ericsson CNIS-based Cloud RAN solution and its key components is
described in detail later in this section.
Ericsson Cloud RAN Certification and Controlled Introduction:
• The Ericsson Cloud RAN solution shall be certified in the Redmond Lab and at 3 FFA locations (Dallas, Atlanta
& [Link]) with 10 sites total. The 2024 deployment target for Ericsson Cloud RAN is 100 sites across
those 3 FFA locations. For FFA, we will only use sites with a dRAN configuration, and the FDD feature will
be from the classical BBU. The FFA sites shall be provisioned manually (no Tech Dev “transport”
automation shall be available). Also, we need to confirm that D2 MSN is already deployed or
planned to be deployed to support these FFA/CI sites.
• The solution shall initially be certified with ENM for Network Management/FCAPS until EIAP can fully
support O1, at which time the solution will pivot to EIAP.
• After FFA exit, there will be a Controlled Introduction of 1500 additional sites (which will include Ericsson
and Nokia Markets).
Ericsson Cloud RAN Certification Draft Timeline:
ENM - Phase 1

• Lab Entry (ENM): July 2, 2024


• FFA Entry (ENM): October 1, 2024 (10 sites)


• Start Controlled Introduction Phase 1 (ENM): Q4 2024 (100 sites)


• RFA “Reduced Functionality Exit” / LGA (ENM): December 13, 2024
• Cloud RAN TDD Feature Parity Lab Entry: Q2 2025
• TDD GA Only (ENM): July 1, 2025
• Start Controlled Introduction Phase 2 (ENM): Q2 2025 (1500 sites)
EIAP – Phase 2
• Lab Entry with EIAP (Hybrid A1):
• Lab Entry with EIAP (O1):
• O1 FM support: Q3-2024
• O1 PM support: Q3-2024
• O1 CM support: Q1-2025
• Lab Exit with EIAP (O1): Q2 2025
• Pivot FFA sites from ENM and EIAP – Q2 2025
• FFA Exit (EIAP): July 1, 2025
• Pivot all FFA and CI sites to EIAP: Q4 2025
• Cloud RAN FDD and FDD+TDD Feature Parity/ Cloud RAN FN Lab Entry: Q4 2025
• Complete Controlled Introduction Phase 2: EOY 2025
• GA (MVP): Q1 2026 (Start scale to 12K Sites)
• Full GA: Q4 2026
Certification Ownership:
• Susan Reiger’s team will support the Cloud RAN Lab/FFA testing.
• Chad Archer’s team will be responsible for the roadmap and Cloud RAN application feature testing
for EIAP and Cloud RAN.
• Chad Archer’s team will own certification of the FHS (if applicable), as it is a RAN component for
Cloud RAN.
• Adam Loddeke’s team will be responsible for the roadmap and testing of the CaaS.

Ericsson Cloud RAN Application SW/Features


• Ericsson Cloud RAN (vCU and vDU) shall be installed, integrated, certified and commercially accepted on
the Ericsson CNIS platform.
• The Ericsson G3 BBU will be used as the initial LTE base for E5 and ENDC, with the addition of the G4 BBU
when available.
• The initial Ericsson Cloud RAN solution will be “NR” only. LTE on Cloud RAN shall be supported in 2027.
• The Ericsson Cloud RAN Solution shall be compatible with the 5G/LTE Classic RAN and support 5G
standalone and non-standalone modes.


• The below cloud RAN “MVP” features shall be delivered in incremental Ericsson SW Releases (not in initial
2024 lab delivery): the detailed MVP list can be found via [1, 2] below

RAN   Cloud RAN TDD Feature Parity               2Q25
RAN   Cloud RAN FDD and FDD+TDD Feature Parity   4Q25
RAN   Cloud RAN FN                               4Q25

• To facilitate interworking between the existing Ericsson RAN compute basebands and the Ericsson Cloud
RAN compute components such as CR DUs, Ericsson will reuse an existing proprietary interface
named E5 between the classical BBU and the Cloud RAN vDU server.
• The E5 solution allows AT&T to utilize features already deployed within the Classic RAN installed base,
such as Ericsson Carrier Aggregation and Ericsson Spectrum Sharing, and to extend these capabilities into
the Cloud RAN environment. Carrier aggregation is therefore supported across the Classic RAN and Cloud
RAN platforms.
• In the Cloud RAN solution, there are 3 main Cloud Native Functions (CNFs) which provide the RAN
functionality (vCU, vDU, and RANS)
1. vCU (Cloud RAN Centralized Unit, Control Plane and User Plane): A vCU instance is composed of
multiple functions in the same location and runs on CNIS Compact Cloud, the CNIS multi-node
configuration. The vCU is an all-software solution running on x86 COTS hardware that meets the
Ericsson-provided requirements. The vCU is responsible for radio controller and packet processing
functions and follows cloud-native design principles. The vCU is separated into CU-CP (Control Plane)
and CU-UP (User Plane) parts, which allows those functions to scale independently.
The vCU POD shall be located at a centralized hub location (i.e., an MTSO). Around 500 sites can be
managed by a vCU POD Medium and 150 sites by a vCU POD Small. vCU dimensioning depends on the
number of NR carriers at the cell site, the Cloud RAN deployment scale, and the 5 ms one-way (10 ms
round-trip) midhaul latency between the cell sites and the hub location hosting the vCU.

2. vDU (Cloud RAN Distributed Unit): The vDU will run on the CNIS Single Node configuration. The Cloud
RAN DU (vDU) utilizes RAN algorithms with unified device and UE chipset integration (between the
5G/LTE Ericsson Radio System and the Ericsson Cloud RAN System Verified Solution). The CR CU supports
both low and mid-band deployments. The CR DU is optimized for 5G, including low and mid-band
frequencies. It also supports both NSA- and SA-capable devices.
i. The following properties are supported by the vDU:
o Cloud-native implementation of 3GPP-defined gNodeB DU functions
o Container-based components designed to run on COTS hardware and Kubernetes
infrastructure to meet Ericsson provided requirements.
o Interfaces via the 3GPP-defined F1 towards the gNodeB CU-CP and CU-UP
o L1 processing, such as FEC/LDPC, offloaded to a selected HW accelerator.
o The Ericsson Cloud RAN System Verified Solution utilizes its own L1 software and
algorithms.
o Supports NR NSA Low-Band FDD
o Supports NR NSA Mid-Band TDD
o Supports NR SA Low-Band FDD
o Supports NR SA Mid-Band TDD
o Interfaces with Ericsson legacy CPRI-based radio units over the Ericsson Radio
Gateway (RGW)


o Interfaces with Ericsson eCPRI-based radio units


ii. The following functions are supported by the vDU:
o Control and user plane functions of baseband processing: L1, L2, RLC, and MAC
o Lower parts of traffic control: DU handling of cell, UE, and NR Sector Carrier
o Sector Carrier management
o Logical channel to transport channel mapping
o RLC segmentation
o HARQ
o Admission control
o MAC scheduling
o Optimizing broadcast for each local broadcast area

For this project, the vDU server is expected to be based on Intel Sapphire Rapids EE MCC (32
cores). The lab availability is expected to be early 2024.

3. RANS – RAN Support Discovery: The RANS is a cloud-native software solution that handles service
discovery functionality in the Cloud RAN. The RANS is container-based to run on COTS hardware and
Kubernetes infrastructure that meets the Ericsson provided requirements. It is the cloud-native
implementation of the Cloud RAN service discovery function that is fully integrated with the Ericsson CR
CU, helping the CU-CP to find the right CU-UP.
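As an illustrative aid (not part of the PSD), the vCU dimensioning figures quoted above — roughly 500 sites per vCU POD Medium, 150 per vCU POD Small, and a 5 ms one-way midhaul latency budget — can be sketched as a back-of-the-envelope sizing calculation. The function and variable names here are hypothetical.

```python
import math

# Dimensioning figures quoted in this PSD (illustrative only):
POD_CAPACITY = {"medium": 500, "small": 150}   # cell sites per vCU POD
MIDHAUL_ONE_WAY_BUDGET_MS = 5.0                # 10 ms round trip

def vcu_pods_needed(site_count: int, pod_size: str) -> int:
    """Minimum number of vCU PODs of the given size to cover site_count sites."""
    return math.ceil(site_count / POD_CAPACITY[pod_size])

def within_midhaul_budget(one_way_latency_ms: float) -> bool:
    """Check a candidate cell-site-to-hub path against the 5 ms one-way budget."""
    return one_way_latency_ms < MIDHAUL_ONE_WAY_BUDGET_MS

# Example: the MVP scale target of 12,000 sites served by Medium PODs.
print(vcu_pods_needed(12_000, "medium"))  # 24
print(within_midhaul_budget(3.2))         # True
```

Actual dimensioning would also factor in the NR carrier count per site, per the text above; this sketch covers only the site-count and latency constraints.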

Ericsson Cloud RAN CNIS Platform/Infrastructure:


The Ericsson Cloud RAN solution will be based on CNIS Compact Cloud and CNIS Single Servers. The
solution is a cloud-native implementation of 3GPP 5G RAN that enables the following, as specified by Ericsson:

• Distributed deployment, driven by the Higher Layer Split (HLS) per the 3GPP standard.
• Running on COTS hardware.
• Cloud-native RAN applications deployed via an open-source container orchestration
system (based on Kubernetes, here CNIS), supporting automated deployment, scaling, and
management.

The figure below shows the Ericsson Cloud Native Infrastructure (CNIS) cloud platform architecture.


Figure: Ericsson Cloud Native Infrastructure (CNIS)


• The Ericsson Cloud Native Infrastructure (CNIS) solution is a bare-metal Container-as-a-Service (CaaS)
infrastructure for cloud-native applications.
• In the future target architecture, we may migrate Ericsson Cloud RAN to a 3rd-party CaaS vendor, and the
team will have to work out the migration path and solution for that (that would be a separate PMATT/PSD).
• For dRAN, the vDU is at the cell site. The vCU is placed at an MTSO (preferred over a Network Cloud
location). For CNIS itself, CNIS Compact Cloud will be on the vCU server and CNIS Single Node on the
vDU server. The OMC is in a centralized location (a subset of the vCU locations, since an OMC POD can
manage up to 2000 sites).
• The Ericsson CNIS solution consists of the following 4 components:

1. Ericsson Cloud Container Distribution (CCD): The Ericsson cloud platform in the solution is a
bare-metal Container-as-a-Service (CaaS – Ericsson CCD) infrastructure optimized for highly
distributed environments. Two infrastructure options for hosting are available: (a) CNIS
Compact for the CR CU(s), or (b) CNIS Single Node, the infrastructure required for the CR DU.
2. Ericsson Operations Manager Cloud Infrastructure (OMC): The OMC provides single-pane-of-glass
management of distributed Ericsson cloud infrastructure deployments and supports fault
management, performance management, and centralized log management for Ericsson cloud
products. It will also serve as the central operation point to trigger the different LCM operations
for the corresponding clusters, be they single-server or multi-server.
3. Ericsson Software Defined Infrastructure (SDI): Used for the vCU (CNIS Compact Cloud) and the
management POD (OMC); it is not applicable to the single-node vDU. In the Ericsson CNIS Cloud
RAN solution, SDI is built to telco grade and integrated with the other CNIS components via
well-known telecom management protocols. The system supports the following functionality
during the entire life cycle of the equipment:
• Discovery and identity
• Inventory
• Boot services
• Multi-vendor server management


• POD management
• Networking fabric management
• Performance management and fault management
• Upgrade
The Smart Equipment Management Controller (SEMC) performs the infrastructure management
function of the SDI 3 system. The Data Fabric is composed of 2x Extreme SLX8720 switches.
The 8720 is based on the Broadcom™ Trident3 X7 chipset and has 32 Quad Small Form-Factor
Pluggable 28 (QSFP28) ports. Each 8720 provides 32 100 GE ports, each of which can be broken
out to four 25 GE ports. For out-of-band management, the 8720 provides two 10/100/1000 Mbps
RJ-45 ports for redundancy. The Control Fabric is composed of 2x Ericsson EAM-S switches. The
EAM-S 0201 is a control network switch for the SDI 3 infrastructure management. It comes as an
integrated system and hosts both the Network Operating System and the SDI 3 SEMC management
functionality. The EAM-S 0201 offers 40 RJ45 1000BASE-T Ethernet ports, which provide
cost-effective cabling to computer systems and data switches. Each control switch is assumed to
be connected to the customer O&M network via a 10GE interface.
4. Ericsson Software Defined Storage (SDS): Ceph-hosted storage providing block and file
storage.
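As a quick arithmetic check (illustrative only, not part of the PSD), the Data Fabric figures quoted above — 2x SLX8720 switches, each with 32 QSFP28 100 GE ports breakable into four 25 GE ports — imply the following port and bandwidth totals:

```python
SWITCHES = 2       # 2x Extreme SLX8720 in the Data Fabric
PORTS_100GE = 32   # QSFP28 ports per switch
BREAKOUT = 4       # each 100 GE port breaks out to four 25 GE ports

ports_25ge = SWITCHES * PORTS_100GE * BREAKOUT   # 25 GE ports after full breakout
fabric_gbps = SWITCHES * PORTS_100GE * 100       # aggregate fabric bandwidth, Gb/s

print(ports_25ge, fabric_gbps)  # 256 6400
```

That is, up to 256 x 25 GE breakout ports and 6.4 Tb/s of aggregate port capacity across the two-switch fabric, before any redundancy or oversubscription considerations.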

Network Management for Cloud RAN and CNIS – ENM (Interim Solution for Lab and FFA Certification)
• For initial Cloud RAN certification, ENM will be the Network Management System until EIAP can support
“O1” Cloud RAN integration.
• ENM integration with Cloud RAN using EOI (Ericsson Open Interface) shall be supported in the initial
EIAP Q2 2024 lab release.
• The A1 interface to EIAP for Cloud RAN shall also be supported in the initial EIAP Q2 2024 release
(Real-Time RIC to xApps).
• ENM will get FCAPS from the vCU and vDU via the existing EOI interface; FCAPS and NBI should be the
same as the physical macro RAN FCAPS supported today with ENM.
• CNIS infrastructure FCAPS would be supported by the same NBI applications that currently support
CNIS platform alarms and metrics for other solutions (i.e., we can use the same NBI that currently
supports cMME, since that resides on CNIS).


Figure: Mock-up of network management interfaces with the ENM interim solution for July 2024 lab entry
(depicting ENM, EIAP, and the NBI apps that support CNIS FM and PM for cMME).

Network Management for Cloud RAN and CNIS - EIAP (Final End State)

• Ericsson Cloud RAN solution shall interface with EIAP for network management and FCAPS support.
• EIAP platform and initial feature certification is being handled by separate PMATT 415761. EIAP features
for Ericsson Cloud RAN will be delivered as part of the Ericsson C1 and C2 SW Releases (see tables
starting on page 15 in EIAP PSD SMO_RAN Transformation _PSD_ DRAFT _V1.docx ).
• Ericsson Cloud RAN (vCU and vDU) shall be integrated, certified and commercially accepted using EIAP as
the Network Management system for Final End State.
• EIAP shall provide network management for the vDU POD, vCU POD and the Infrastructure Mgmt POD
1. vCU POD = Ericsson cloud infrastructure (CNIS) + vCU
2. vDU POD = Ericsson cloud infrastructure (CNIS) + vDU
3. The Mgmt POD has an equivalent configuration to the vCU POD. Management PODs host OMC,
Ericsson Security Manager (ESM), and Cloud Native Network License Server (cNeLS)
• vCU shall be connected to EIAP via Open O1 or EOI (if needed during transition period)
• EIAP C2 SW drop will have the feature support for EOI interface for Cloud RAN (Q3 2024 to Lab)
• vDU and vCU shall be connected to each other via 3GPP F1 interface.
• The EIAP Non-RT RIC (automation platform) shall send policies to the vCU via the A1 interface. The A1
interface to EIAP for Cloud RAN shall be supported in the initial EIAP Q2 2024 release (Real-Time RIC to
xApps).
• CNIS components will be managed by the OMC (Operations Manager Cloud Infrastructure), which will report
FCAPS information northbound to EIAP. The OMC shall report FCAPS to EIAP and handle LCM use cases
via O2.
• The EIAP C2 SW drop will have feature support for the O2 interface for Cloud RAN (Q4 2024 to Lab).
• EIAP shall support RAN SW FCAPS with O-RU/3rd Party Radio via Open Fronthaul M-Plane interface.
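As a reading aid (not part of the PSD), the interface roles listed in the bullets above can be summarized in a small lookup table. The structure, wording, and helper function are illustrative only.

```python
# Interface roles in the EIAP end state, as described in this section.
INTERFACES = {
    "O1": "EIAP <-> vCU/vDU: network management and FCAPS",
    "O2": "EIAP <-> OMC: infrastructure FCAPS and LCM use cases",
    "A1": "EIAP Non-RT RIC -> vCU: policies",
    "F1": "vDU <-> vCU: 3GPP-defined interface between the CNFs",
    "Open Fronthaul M-Plane": "RAN SW FCAPS with O-RU / 3rd-party radio",
    "EOI": "ENM/EIAP <-> Cloud RAN, if needed during the transition period",
}

def role(interface: str) -> str:
    """Look up the role of a named interface (hypothetical helper)."""
    return INTERFACES.get(interface, "unknown interface")

print(role("A1"))  # EIAP Non-RT RIC -> vCU: policies
```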


License Manager cNeLS


Cloud RAN depends on a License Manager (LM) to retrieve the NeLS fingerprint-based LKF from a Cloud
Native Network License Server (cNeLS). The LM also reports capacity license usage and software application
information back to cNeLS.
Ericsson will run and manage cNeLS external to AT&T’s network.
The cNeLS can automatically acquire network-wide RAN licenses through the Ericsson License Gateway
(ELG), or they can be obtained manually by downloading the LKF from the Electronic License Information
System (ELIS) and installing it on cNeLS. Even if the cNeLS becomes unreachable, the LM on the Cloud RAN
can still operate, and license consumers on Central Units (CUs) and Distributed Units (DUs) can activate their
functionality based on the cached licenses in the LM’s redundant database.
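The cached-license fallback behaviour described above (license consumers keep activating functionality from the LM's redundant database when cNeLS is unreachable) can be sketched roughly as below. The class and method names are hypothetical and do not reflect Ericsson's actual LM implementation.

```python
class LicenseManager:
    """Sketch of the cached-license fallback described in this section."""

    def __init__(self):
        self.cache = {}  # stands in for the LM's redundant license database

    def fetch_from_cnels(self, feature: str) -> str:
        """Stand-in for retrieving a fingerprint-based LKF from cNeLS;
        raises ConnectionError when cNeLS is unreachable."""
        raise ConnectionError("cNeLS unreachable")

    def activate(self, feature: str) -> bool:
        try:
            lkf = self.fetch_from_cnels(feature)
            self.cache[feature] = lkf   # refresh the local cache on success
            return True
        except ConnectionError:
            # cNeLS unreachable: fall back to the cached license, if any.
            return feature in self.cache

lm = LicenseManager()
lm.cache["carrier-aggregation"] = "cached-LKF"
print(lm.activate("carrier-aggregation"))  # True (served from cache)
print(lm.activate("uncached-feature"))     # False (no cached LKF)
```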


Transport Solution for Ericsson Cloud RAN (Author: Yolius Diroo)


With the evolution of the RAN network to Cloud RAN and the segregation of the BBU function into vCU and
vDU, as defined earlier in this document, there will be a requirement to support a new transport interface,
referred to as Midhaul, between the vCU and vDU.

The placement of the vCU and vDU is important to transport, as the Midhaul latency requirement is less than 5
milliseconds one way (10 ms RTT, Round Trip Time). Currently, three options are proposed to meet the
latency requirement; they are referred to in this document as Phase 2A and Phase 2B.

There are two flavors of the 2A proposal, as follows; they are demonstrated in more detail later in this document.
• vDU at the cell site and vCU at the MTSO. This option is defined as dRAN.
• vDU at the CO/Hub and vCU at the MTSO. This option is defined as cRAN.

The 2B proposal consists of one flavor: deploying both the vDU and vCU at the CO/Hub location. This option
is also defined as cRAN.

The other important aspect to consider is the TOR (Top-of-Rack) switch interface with the J2 D2 MSN. But
before discussing the Cloud RAN architecture, it is useful to define some of the possible xHaul connectivity in
the Cloud RAN design.

The figure above depicts the different xHaul connectivity options. Not all of these may be applicable to Cloud
RAN, but generally, xHaul refers to any type of haul, such as Backhaul, Shorthaul, Sidehaul, Fronthaul, and
Midhaul. The attempt here is to visually define the xHaul and possible interfaces. For example:

Backhaul connectivity:
• SIAD to the MSN
• MSN to MSN
• Could also be over DaFi DWDM

Shorthaul connectivity:
• SIAD to BBU
• DWDM Mux
• Could also be over DaFi DWDM


Sidehaul connectivity:
• BBU to BBU
• Could also be over DaFi DWDM Mux

Fronthaul connectivity:
• BBU to the Radio
• Could also be over DaFi DWDM Mux

Midhaul connectivity:
• vDU to vCU
• Could be over SIAD to MSN as part of Backhaul
• Could also be over DaFi DWDM Mux
For more details on the requirements for the xHaul, please see the section titled “Ericsson Requirements as
they pertain to each Transport Element”.

In the following sections, we will discuss the high-level designs supporting Cloud RAN solutions.

The figure above is the representation of Phase 2A in a dRAN configuration without FHS. As depicted in this
figure, the vDU is hosted at the Cellsite and vCU at the MTSO.


The figure above is the representation of Phase 2A in a dRAN configuration. As depicted in this figure, the vDU
is hosted at the Cellsite and vCU at the MTSO.

• At the cell site, the interface to transport will be through the SIAD. The classical BBU(s) already
deployed at the cell sites are terminated to the SIAD and carry the Backhaul traffic to the MTSO. The
interface between the classical BBU and the SIAD will remain as is; there is no requirement to change it.
• At the cell site, there is also a newly deployed vDU that will terminate directly to the SIAD. This interface from
the vDU to the SIAD will be referred to as Midhaul and will traverse the same EVC as the current Backhaul. The
current interface requirement between the vDU and the SIAD is a 10G interface.
• The Backhaul will be a traditional Switched Ethernet circuit, either ASE (AT&T Switched Ethernet) or a 3rd-party
circuit, as leased today in the dRAN architecture. The minimum Backhaul requirement is a 3G EVC,
which requires a 10G UNI.
• At the MTSO, a J2 MSN POD/Cluster is required to support Cloud RAN. One of the main reasons for the need
for J2 MSN is the 100G interface to the TOR. The J2 MSN requirement of course means that the cell site will have
to be migrated from D1 MSN to J2 MSN, if not already migrated.
• At the MTSO, access to the QMX is also required for interconnectivity to the Management Fabric, also known
as the Cloud Native Infrastructure System (CNIS). The interface requirement between the QMX and CNIS will be
10G.


The figure above is also representation of Phase 2A in a dRAN configuration and is strictly focused on depicting
the logical flow. Therefore, as mentioned before, the interface to Transport at the Cellsite will be through SIAD
and in the MTSO through MSN.

• The Classical BBU that are already deployed at the cellsites and terminated to the SIAD will continue to carry
the Backhaul traffic.
• The vDU that terminates directly to the SIAD will carry the Midhaul traffic.
• The Backhaul from the BBU and the Midhaul from the vDU will traverse the primary EVC via the SIAD and will be carried to the MTSO through the Switched Ethernet network, depicted as the gray cloud.
• At the MTSO, both Backhaul and Midhaul traffic will traverse the J2 MSN Access and then the J2 MSN Hub.
• At the J2 MSN Hub, the Backhaul and Midhaul Traffic from the cellsite will be segregated. The Backhaul
Traffic will be routed to the PE routers while the Midhaul Traffic will be routed to the TORs.
• The TOR will route the Midhaul Traffic to the vCU, completing the communication interface with the vDU at the Cellsite. Note that this Midhaul connectivity between the vDU and vCU should not exceed 5 milliseconds of latency in one direction.
• The vCU at the MTSO will then reroute that original Midhaul traffic back to the TOR as Backhaul.
• The TOR will route the Backhaul back to the J2 MSN Hub, and the J2 MSN Hub to the PE.
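The hop-by-hop segregation described above can be sketched as follows; the device names mirror the text, while the routing function itself is an illustrative model, not an AT&T implementation:

```python
# Backhaul and Midhaul share the dRAN path up to the J2 MSN Hub,
# where they are segregated toward the PE routers and the TOR/vCU.
SHARED_PATH = ["SIAD", "Switched Ethernet", "J2 MSN Access", "J2 MSN Hub"]

def path(traffic: str) -> list:
    """Return the device path for 'backhaul' or 'midhaul' traffic."""
    if traffic == "backhaul":
        return SHARED_PATH + ["PE"]          # Backhaul routed to PE routers
    if traffic == "midhaul":
        return SHARED_PATH + ["TOR", "vCU"]  # Midhaul routed to TOR, then vCU
    raise ValueError("unknown traffic type: " + traffic)

print(path("midhaul")[-1])   # the vCU terminates the Midhaul
print(path("backhaul")[-1])  # the Backhaul lands on the PE
```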

As mentioned earlier, the Midhaul connectivity has a requirement of <5ms one-way latency, which could be challenging to meet in certain conditions when using a dRAN architecture with a Switched Ethernet network. As such, there are two options that may be suitable to meet this challenge. One option is to use a Dark Fiber solution with DWDM, also known as DaFi, in the Backhaul/Midhaul. The other option is to use cRAN instead of dRAN to shorten the Midhaul distance and meet the latency requirements. The next few diagrams will depict these options and describe the solutions.
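A rough feasibility check shows why these options help. Assuming the common rule of thumb of ~5 µs of propagation delay per km of fiber (the per-switch delay figure below is likewise an illustrative assumption, not a measured AT&T value), the 5 ms one-way budget is consumed by distance plus switching:

```python
# Rough one-way latency estimate for the Midhaul budget.
# Assumes ~5 us/km fiber propagation (rule of thumb); the per-hop
# switching delay is an illustrative figure only.
FIBER_US_PER_KM = 5.0
BUDGET_US = 5_000  # the <5 ms one-way Midhaul limit

def one_way_us(route_km, switch_hops=0, per_hop_us=100.0):
    """Estimate one-way latency from fiber distance and switching hops."""
    return route_km * FIBER_US_PER_KM + switch_hops * per_hop_us

# Dark fiber (DaFi) is point-to-point: propagation delay only.
print(one_way_us(200))                 # 1000.0 us, well inside the budget
# A long switched-Ethernet path approaches the budget quickly.
print(one_way_us(900, switch_hops=4))  # 4900.0 us, marginal
```

Shortening the route (cRAN) or removing the switching hops (DaFi) are the two levers the options above correspond to.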


The figure above depicts one of the options to overcome the one-way latency of <5ms for Midhaul. As
depicted in the above figure, DaFi with Dark Fiber is used between the Cellsite and the MTSO.

From the Cellsite perspective, this scenario is similar to the previous diagram depicting the dRAN configuration, except that the D2 SIAD is now connected to the DaFi, and DaFi Transport will therefore carry the Midhaul and Backhaul traffic to the nearest MTSO. From the MTSO perspective, on the other hand, the design will be different. For example, in the DaFi solution the DaFi will need to terminate to the QMX at a 10G interface instead of to the J2 MSN. Once traffic is terminated to the QMX, the traffic flow from the Backhaul and Midhaul perspective will remain similar to what was described previously.


The figure above depicts another option to overcome the one-way latency requirement of <5ms for Midhaul. This option, unlike the previous one, uses a cRAN design as opposed to a dRAN design.

In the cRAN design, DaFi with Dark Fiber is used between the Cellsite and the cRAN Hub location. In this scenario, DaFi will carry the Fronthaul traffic from the radios and the Shorthaul traffic from the existing classical BBU at the Cellsite to the Hub.

In the cRAN design, as depicted in the diagram above, the FHS and the vDUs are deployed in the Hub location along with either the D2 SIAD or a Remote D2 Access MSN. Which device (D2 SIAD or Remote D2 MSN) is deployed at the Hub will be determined later as part of the NDR when the design is finalized.

Once again, the Midhaul and Backhaul traffic from the hub to the MTSO will be the same as previously
described.

In both solutions using DaFi with dark fiber, dRAN and cRAN, the Midhaul latency will be reduced. In the dRAN design, DaFi with Dark Fiber as a layer 1 solution provides a point-to-point connection, so there is no switching involved that could impact latency. In the cRAN design, the vDU is deployed in the Hub location, which is geographically much closer to the MTSO where the vCU is deployed than the vDU at the cellsite would be; therefore, using switched Ethernet circuits in the cRAN solution is less likely to impact the Midhaul latency.


The figure above is the representation of Phase 2B, which is a full-scale cRAN configuration, meaning that the entire Cloud RAN cluster is now deployed at the Hub location instead of partially in the Hub and partially in the MTSO.

Similar to the Phase 2A cRAN configuration described previously, Phase 2B will also use DaFi from the cellsite to the Hub to carry Fronthaul and Shorthaul traffic. Traffic segregation and processing will take place at the Hub, where the Shorthaul will traverse from DaFi to the D2 SIAD/Remote Access MSN and the Fronthaul will traverse the Cloud RAN Infrastructure. In this configuration, since TORs are deployed at the Hub location, 100G interfaces to the D2 SIAD/Remote Access MSN will be required.

Once both Fronthaul and Shorthaul are processed at the Hub, the traffic will traverse the Switched Ethernet Network to the nearest MTSO and terminate into the MSN.

In this scenario, since both the vDU and vCU are collocated at the same Hub location, the Midhaul latency requirement of <5ms will no longer be a concern.

Timing Synch: For the MVP deployment, the BBU will be used as an interim solution for Primary Timing Synch. There may be an alternative timing synch solution with the long-term architecture (beyond Q1 2026).

Fronthaul Switch (FHS): For the initial 100 sites, the Cloud RAN architecture will not include the FHS; however, FHS will be incorporated in a future phase.

Technical assessment of the options is ongoing with AT&T internal key stakeholders and Ericsson. The teams will reach a conclusion on the MVP sync solution and document it in the NDR. Meanwhile, work is in progress with the server vendors to add sync capability to the next-gen vDU, similar to the classical BBU. The long-term architecture (to be used for scale beyond Q1 2026) is outside the MVP feature set from a Cloud RAN deployment perspective but will be input into Tech Dev development for Phase 2A and Phase 2B.

Ericsson Requirements as they pertain to each Transport Element:

• Fronthaul: Fronthaul will generally consist of 10G or 25G flows, which require dedicated bandwidth.
Multiplexing of multiple fronthaul connections scales very quickly to 100G or 200G, and for large sites with
many radios could reach 400G in the very near future. Fronthaul latencies are limited due to radio
performance and synchronization requirements and radio feature needs. Maximum one-way fronthaul
latencies of 75μs are the limit today. The equipment should be capable of eCPRI multiplexing to reduce
fiber interconnect needs between the equipment and the vDU, whether that be a single server or a vDU
server cluster. For scenarios where a 5G NR FDD carrier must be hosted on a legacy CPRI radio, there are two options for the CPRI to eCPRI conversion: the less expensive option of using a radio gateway, or using the Ericsson Fronthaul Switch (FHS) with a CPRI line card. The decision between the two options will depend on whether there are additional needs, such as synchronization and switching functions, at the cell site.
• Sidehaul: Sidehaul will generally consist of 10G and 25G interfaces interconnected through a layer-2
fabric. For site-to-site sidehaul interconnect, this can scale to 75G-100G. Sidehaul requirements allow for
up to 175μs one-way latency. The equipment should be able to provide E5 interconnect for NR ARC
features such as Carrier Aggregation and LTE ERAN features like CA and Uplink CoMP. This generally


requires 10G and/or 25G interfaces for each baseband and vDU instance. It should also be able to provide
additional port connectivity to allow small cell CRAN nodes to gain ARC and ERAN interconnect with macro
sites.
• Midhaul: Midhaul is in the 1G-10G range in throughput, scaling toward 40G for very dense vDU clusters toward the vCU. Latencies of 5 ms one-way (10 ms round trip) are acceptable on the midhaul connections. The equipment should have the capacity to serve midhaul interfaces toward the vCU clusters in the network.
• Backhaul: Backhaul will generally scale from 10G-40G with some parts of the network scaling toward 100G
or more based on network site density and subscriber density. Backhaul latencies are expected to be 10ms
or less one-way. For backhaul traffic, several capabilities in the equipment should be considered to optimize
the network interconnect and manageability for the transport, as well as allowing the multiplexing of
backhaul traffic with traffic from other domains.
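The one-way latency limits quoted for each transport element above can be collected into a single lookup; the values come from the text, while the helper function is an illustrative sketch rather than certification tooling:

```python
# One-way latency limits per transport segment, in microseconds,
# as stated in the Ericsson requirements above.
MAX_ONE_WAY_US = {
    "fronthaul": 75,     # radio performance and synchronization constraints
    "sidehaul": 175,     # ARC/ERAN interconnect
    "midhaul": 5_000,    # vDU toward the vCU
    "backhaul": 10_000,  # toward the core
}

def within_budget(segment, measured_us):
    """True if a measured one-way latency meets the segment's limit."""
    return measured_us <= MAX_ONE_WAY_US[segment]

print(within_budget("fronthaul", 60))  # True
print(within_budget("midhaul", 6200))  # False: exceeds the 5 ms limit
```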

Tech Dev Impacts for Ericsson Cloud RAN


• Tech Dev development to support Ericsson Cloud RAN Transport automation and NM FCAPS will be split
into 2 Tech Dev Phases (Phase 2a and 2b)
• Phase 2A Scope: Two scope components:
1. RAN FCAPS: Tech Dev NBI will interface with EIAP to support Ericsson Cloud RAN FCAPS (FM,
PM, CM, Security) along with FCAPS for the CNIS Infrastructure.
2. Transport automation: Phase 2A shall support MVP Transport Features for the Q1 2026 GA for Transport automation (i.e., inventorying and automated IP assignment). The MVP Use Cases for the DRAN and CRAN configurations to be included in Phase 2A tech dev work are depicted below:

• Phase 2B Scope: This will be an incremental Tech Dev release to support scale and the Phase 2B configurations for CRAN as depicted in the diagram above. The full scope for Phase 2B has not yet been defined, and Phase 2B full deployment is not a requirement for the Q1 2026 GA. Phase 2B development will start in Q3 2025 and should map back to this Project/PMATT.

• High-Level Tech Dev Requirements for “Phase 2a” (Filter by Phase 2A on “TD Scope and Timeline”
tab) Sharepoint Link:
RAN Transformation_Tech Dev Working Matrix_Action [Link]
EIAP Use Cases _V3.pptx


• Tech Dev Impact for Cloud RAN on ENM interim solution for certification:
o Tech Dev shall support incremental PM, CM, FM for the Cloud RAN along with support of new
inventory templates. FCAPS shall be sent to Tech Dev NBI via existing ENM interface.
o Tech Dev shall support FCAPS from the CNIS Infrastructure, leveraging existing Tech Dev Apps
that currently receive FCAPS from CNIS used for other platforms
o The team is proposing using the CD Agile Tech Dev process to support the above and exclude this
from Phase 1 or Phase 2A Tech Dev PI Release

Note: The initial Ericsson Cloud RAN lab entry and FFA entry will not have any Tech Dev automation for Transport (i.e., IP assignment). The 100 FFA sites and 1500 CI Sites will not have full Tech Dev automation, and Operations support will be required.

Tech dev Timelines for Phase 2A and 2B:

Ericsson Cloud RAN “draft” deployment summary:


CPRI: Common Public Radio Interface. Interface between baseband units (BBU) and remote radio units (RRU
or RRH) through a TDM serial interface
eCPRI: Enhanced CPRI, supporting the interface between BBU and RRU using an Ethernet/IP packet-based interface. eCPRI is required for Open RAN.
2024: All Cloud RAN deployments in 2024 will be TDD only using AIR 6449/AIR 6419/AIR 6472 n77 suite
of single and dual band AAS radios @ eCPRI only. The plan is 100 Sites in 2024 as part of Phase 1
Controlled Introduction.


2025: All Cloud RAN deployments in 2025 will be TDD only using AIR 6449/AIR 6419/AIR 6472 n77 suite of
single and dual band AAS radios @ eCPRI only. The plan is 1500 Sites in 2025 as part of Phase 2
Controlled Introduction.

2026: Cloud RAN deployments in 2026 will be TDD and FDD @ primarily eCPRI, but some CPRI radios are bound to be included due to exceptions. Once MVP GA is achieved in Q1 2026, the plan is to start scaling to 12K sites in Q2 2026.
o FDD bands are usually n5, n25, n66.
o B14, B29, and B30 will remain LTE
Note: The Cloud RAN vDU will be similar to the G3 and G4 in that it supports modes or modules. FDD O-LLS Cat-A (eCPRI) and FDD CPRI (via Radio Gateway) are on separate modes/modules. Due to cost and vDU capacity, we "may" need to run all FDD NR on one module (CPRI) even though some radios are eCPRI capable.

List what is not in Scope for the project.


• cMME Roll-out in 2024/2025 will be a separate PMATT. A request was made to the Ericsson PMO to help coordinate the cMME and Macro RAN Migration to EIAP and the Cloud RAN deployment
• Non Ericsson CNIS (3rd Party) CaaS vendor selection for AT&T targeted cloud RAN solution
• Open Fronthaul is not part of this Ericsson CNIS turnkey solution so this will be part of a separate
PSD and PMATT project

2.1.2. Describe the present method of operation (PMO) and future method of operation (FMO). (Include
technological improvements, innovations being incorporated, ordering and provisioning changes envisioned
etc.)
PMO: AT&T does not support a Cloud RAN solution for C-RAN and D-RAN architectures. AT&T supports classical purpose-built BBU HW that is integrated with Ericsson ENM for Network Management (D1 architecture).
FMO: AT&T will deploy a Cloud RAN architecture for C-RAN and D-RAN architectures based on the Ericsson CNIS cloud RAN solution and integrate the cloud network elements into a cloud-based element management system called EIAP.
2.1.3. Provide a general description of the history or situation that leads to the recognition this project should be
undertaken.
• Cloud RAN increases flexibility, cost savings, and the ability to scale the network more efficiently. 5G calls for new levels of flexibility in architecting, scaling and deploying telecom networks. Cloud technology provides possibilities to complement the existing tried and trusted technologies in the RAN domain.

2.1.4. Is this product or capability like, or does it overlap with an existing service and has it been rationalized within
the product Roadmap? What Products/Services are impacted? Please explain the plan.
• Cloud RAN will drive the replacement of physical CU and DU functions within a BBU deployment and move
the CU and DU functions to Cloud and Cloud Supported Infrastructure


Is this a totally new service/product or an enhancement to an existing service/product? What is the difference between the enhancement and the current service/product? If based on or similar to an existing service/product, please state what that similar or existing service/product is in terms that existing supporting departments/work centers will be able to relate to for assessment purposes.

2.1.5. Does this replace a manual process?

N/A

2.1.6. Is this a deletion of existing functionality?


No, the CU and DU functions will remain the same (and be enhanced); they will just be moved to Cloud.
2.1.7. Is this an extension of existing functionality to additional channels?
No
2.1.8. Are you working with a Contract Manager from Global Supply Chain? If so who (ATTUSID)?
• Scott Nylund (Ericsson), Mike Jones (SMO_EIAP)
2.1.9. Identify the technology involved if known {i.e.: Video, Mobility, Cloud, Common Back Bone, Mobility Data, LTE,
Universal Services Platform or other.}


5G NR RAN system; Ericsson will be deploying Cloud RAN on the Ericsson CNIS Cloud Platform.
2.1.12 Provide a description, diagrams or links to Roadmaps, explaining target architecture if available.
Picture 1: Ericsson Cloud RAN “Target” Logical Architecture

Picture 2: Ericsson Solution Overview – Final End State with EIAP


Answer the questions listed below. If the answer is Yes, please explain and initiate a Certification Project PMATT Proposal.
[Link]. Is new technology being introduced into AT&T's Network Infrastructure? If yes, please insert or
provide a link to hardware specifications or a detailed description of the technology
Yes, the Ericsson CNIS-based cloud RAN system with the Ericsson EIAP management system. Refer to Section 2, Product Description.
Does this project require an upgrade or change to existing network elements or software (e.g., new
card in an existing frame)? If so, will it affect the Network Element Gold Standards?
Yes, the Ericsson CNIS-based cloud RAN system will replace the existing classical RAN system (e.g., the Ericsson classical RAN system deployed on purpose-built hardware such as G3/G4 baseband).
[Link]. Is a new configuration or feature required that is not yet certified for use in the network?
Yes
Do Business / Operational processes or Methods & Procedures (M&Ps) need to be developed that could
impact network performance?

Yes
2.1.10. Is end-to-end network testing required?
Yes
2.1.11. Will this product/service require Customer Premise Equipment (CPE).
No
2.1.12. Who is the supplier(s)?
Ericsson for the cloud RAN application, CNIS cloud platform, and EIAP. DELL or HPE for the COTS hardware servers.
[Link]. Does your solution include the shipment by AT&T (or its vendor) of a physical product to the customer?
No
[Link]. Will your solution be sold to a customer located in the state of California?
No
2.1.13. Does this product require off network (voice, data, and/or SMS) usage? If no, describe how the project will
ensure that off network usage will not be incurred.
No
2.1.14. Will the project require interworking with other operators?
No
2.1.15. Does this project include IT like work and do you have an IT Schedule of Authorization?
Yes, this will impact Tech dev

2.1.16. Describe any Mobility device impacts.


None
2.1.17. Describe how the mobile solution approach will be addressed for this product/service.
It is imperative to synergize all mobile application development in the corporation and bring the best-in-class
technologies, platforms, tools, standards, practices, user interfaces and customer experience across current
and emerging mobile platforms and form factors. For more information, please reference the Mobile First
Website.
2.1.18. Does the product/project align with Business Marketing Compliance Considerations?
N/A, this project is driven by Network
2.1.19. Will the project have Multiple Sub-Projects? This project may have sub-projects for the various phases.

(This document does not include future identified Sub-Projects; the document will be required to be updated and re-baselined.)

Sub-Project PMATT # n/a


Name

Deliverables/Functionality

Type of Work

CDD

Special Considerations

Assumptions

Dependencies

Projects IDs
(PRIME, PRISM, NTD
PMATT)

Bundle
(For MSP)

Release Sub-Project
(For MSP)

2.2. Success Criteria


2.2.1. How will you measure the Success of the project? What reporting is required?
2.3. Assumptions, Risks, Constraints, Dependencies
(Required for Initial Scope Document)

Describe if there are assumptions/dependencies to this project, its sub-projects and/or if other projects/programs
are dependent on this project. Include if there are projects/ sub-projects that need to precede this project. Identify
any external/internal assumptions/dependencies. Please include in the Assumptions section any known Network
Platforms, Work Centers/Work Forces and or Operations Support Systems/Business Support Systems that this
project might utilize based on knowledge/experience of similar projects.

2.3.1. Assumptions

Assumption # Description Source ATTUSID

2 New processes will be developed to deploy this solution while we are in the testing/implementation phases MN1431



3 Cross-functional agreement on any new processes that are developed, including Tech Dev MN1431

4 Operational tooling and automation

2.3.2. Risks

Risk # Description Source ATTUSID

1 This platform/solution is a new concept to AT&T. We need to ensure we do not try to deploy it using existing processes. MN1431

2 The Ericsson CNIS-based cloud RAN system is a new product, and it will take time to reach feature and performance parity (including stability and reliability) with the classical RAN system that has been deployed for a number of years

2.3.3. Constraints

Constraints # Description Source ATTUSID

2.3.4. Dependencies

Dependency# Description Source ATTUSID



2.4. Third Party Vendor


(Required for Initial Scope Document)

2.4.1. Identify if solution will be developed in-house or through partnering with a Third-Party Vendor.
Third-Party vendor
2.4.2. Please name the vendor.
Ericsson
2.5. Client Desired Date (CDD) / High Level Timeline
• Ft Bliss PIZ (Proof of Concept): Targeting June 2024
• Lab Entry (ENM): July 2024
• FFA Entry (ENM): October 2024 (10 sites)
• Cloud RAN TDD Feature Parity Lab Entry: Q2 2025
• FFA Exit (ENM): Q2 2025
• Controlled Introduction on ENM (?): Q2 2025 (1500 Sites)
• Lab Entry with EIAP (O1):
• O1 FM support: Q3-2024
• O1 PM support: Q3-2024
• O1 CM support: Q1-2025
• Lab Exit with EIAP (O1): Q2 2025
• Pivot FFA sites from ENM and EIAP – Q2 2025
• FFA Exit (EIAP): Q3 2025
• Pivot all FFA and CI sites to EIAP: Q4 2025
• Cloud RAN FDD and FDD+TDD Feature Parity/ Cloud RAN FN Lab Entry: Q4 2025
• GA (MVP): Q1 2026 (scale to 12K Sites by Q2 2026, cumulative)

2.5.1. Include a screenshot of the project’s Client Desired Date calculation and any associated assumptions from the
Client Desired Date (CDD) Calculator Tool.


2.5.2. If the CDD Calculator Tool was not utilized, provide the Client Desired Date along with assumptions around its
feasibility, and a description of how the CDD and its level of confidence were calculated.

2.6. Customer Benefits / Objectives


(Required for Initial Scope Document)

2.6.1. List and describe the customer benefits of the project. Describe the value proposition to the customer. For
Product projects, describe the primary buying triggers and factors that drive the purchase decision for each
segment and sub segment listed in the Target Market section.
N/A

2.7. Customer Experience / Voice of the Customer


(Required for Initial Scope Document)

Describe the Customer Experience in terms of the LBGUPS model. (Learn, Buy, Get, Use, Pay, Service/Support).
Design a consistent customer experience based on a common vision across all interactions for any product.
Before addressing each of the LBGUPS sections and questions, the author will click this hyperlink and use the
CX_AX_guideline tool. Find detailed instructions within the tool. The author must select relevant guidelines
based on which experience touchpoints will be impacted by the project’s scope. The guideline tool allows
selection and export of the guidelines. The author will copy and paste the exported guidelines into the
LBGUPS table below.
IMPORTANT: In some cases, the project will be scoped to deliver only a portion of a guideline, so the author
will revise the original statement, however, the revised statement must not deviate from the intention of the
guidelines.
The author must capture the relevant guidelines and refer to these when ideating to define the scope of the
project. In many cases, a project will be improving an existing experience or even building a new experience,
so it is important to drive the project scope to deliver incremental value that moves AT&T closer to the ideal
customer experience. The scope table should reflect this incremental value.


N/A
2.7.1. Learn (Describe how customers will get information about
the value proposition of the product/offer and how they
will initiate contact with AT&T. How will the Sales team be
trained? (make sure covered in training)

2.7.2. Buy (What expectations does the customer have if they are placing new orders; change orders (SUPP) or are migrating from one service to another?)
N/A

2.7.3. Get (Define the desired Get experience for the customer on how they will obtain the product/service, including status alerts, pre-install requirements, Care support, management of total solution. Describe any work that AT&T Employees may need to perform to establish or fulfill service or note a very similar service that AT&T provides and any modifications that will need to be made. What is the expected interval for customer fulfillment?)
N/A
2.7.4. Use (Define the desired Use experience for the customer, including moving service, changing services, reporting troubles, contract expiration. Who ensures that the functionality is performing as designed? If the functionality is not performing as designed who fixes the problem? When a user of the functionality identifies a problem, what is the process to capture the issue and fix the problem?)
N/A

2.7.5. Pay (Explain how does the customer access the most current (month to date) usage data? What bill view options and download capabilities are available to customers? Will the product/service be supported by the standard biller platform?)
N/A

2.7.6. Service/Support (Define how does a customer report a service issue or trouble and how will they be kept informed of status and final issue resolution?)
Open Question on whether Customer Care needs to distinguish Cloud RAN deployment versus Classical BBU deployment during troubleshooting (Geolink/RMAP)

2.7.7. Describe operations capabilities needed to ensure that your quality and performance expectations are
achieved.
There should not be any customer impact
2.7.8. What are the expected hours and days of operation for the feature(s) or function (s)?

24 X 7 X 365
2.7.9. What are the allowable maintenance windows for the function(s)?
BAU
2.7.10. Response times - application loading, screen open and refresh times, etc.
BAU
2.7.11. Processing times – functions, calculations, imports, exports.
BAU
2.7.12. Query and Reporting times – initial loads and subsequent loads.
BAU
2.7.13. Should any functions or data access be limited? If so, to whom and how (user profiles, login authorization,
authentication, manager approvals)? Is it important to track who initiated/completed a function?
2.7.14. Should the users of the functionality be specifically identified in the IVRs, call flows, in GUIs/applications?
No
2.7.15. Are there any notifications that should be provided to the customer? Start of service, changes, overages,
service cancellation, usage monitoring, etc.?
No
2.8. Customer Care Support
2.8.1. List and describe what customer care support is needed to respond to customers’ need.
Customer Care will most likely need to distinguish Cloud RAN versus Classical RAN on RAN Maps used for troubleshooting (i.e., Geolink/RMAP). Torch impacts are TBD.
2.8.2. Describe any impacts to Customer Care, post-installation, to maintain and support customer service issue.
Customer Care will most likely need to distinguish Cloud RAN versus Classical RAN on RAN Maps used for troubleshooting by technical care reps (i.e., Geolink/RMAP). Torch impacts are TBD.
2.9. Consumer Sales Experience
2.9.1 Is information previously entered by the seller or their support team carried forward without need for
rekeying?
N/A
2.9.2 Would this solution require the seller to complete any new fields, forms or separate service orders for the same
solution or site?
NA
2.9.3 Is this a standalone product or intended to be an integrated and/or bundled solution with other products? If
yes, which ones?
N/A
Has sales support been negotiated for the sales tools used for this solution?
N/A


2.9.4 Are all Sales Segments covered by the solution? If not, what is the timeline to cover all Segments and what is
the rationale to stagger or limit coverage?
N/A
2.9.5 Is the solution to cover E-Rate customers?
No.
2.9.6 Does any Sales facing activity for the solution require a manual process or manual forms? If so describe.
Describe the Sales Experience in terms of the QDPPCOL model. (Qualify, Design, Price, Propose, Contract,
Order, Lifecycle) Describe the approach intended for this project from pre-sale through ordering and lifecycle
sales activities by completing the below chart. (Please engage the Business Sales Experience team for
assistance in completing this section as needed.)
N/A

2.9.8. Qualify (Describe how Sellers will get information to qualify the
product/solution, including inventory, credit checks and address
validation/service availability for all sites in the solution. What fields
must the Seller provide to qualify the solution? How will the Sales team
be trained to understand and sell this solution?)

2.9.9. Design (What are the design/prior requirements and how are those
obtained? What design tools are available? How many types of design
configurations are expected? Do the design tools allow the seller to pass
or upload information to support teams? Do the design tools highlight
and warn about technical limitations and common errors to ensure
supportable customer solutions?)

2.9.10 Price

2.9.11 Propose (Is a proposal template for the solution available or will one
need to be built? Will a sample bill be available for the customer's review
and how will that be obtained and shared with the customer?)

2.9.12 Contract

2.9.13 Order

2.9.14 Lifecycle


2.10 Sales Support


2.10.1. List and describe what Sales support is needed to respond to Sales team needs or those acting on a Seller’s
behalf.
2.11. Product / Customer Migration
2.11.1. List and describe situations in which products will be developed and deployed with the intent of migrating
customers from a prior product to the one being developed. Address the key elements of the product, sales
channel strategy, and customer migration. Identify any migration scenarios / rules, if customers are expected
to disconnect an existing product and purchase the new product. What is the legacy product or solution this
solution will be replacing? Has this been incorporated into the Migration roadmap?
The Classic BBU will eventually be replaced by Cloud RAN (vCU and vDU), but there is no current timeframe to complete this replacement. This project is for GA only.
2.12. CLEC
(Required for Initial Scope Document – Wireline only.)
AT&T is required to provide Competitive Local Exchange Carriers (CLECs) and/or Wholesale customers with
parity and/or a meaningful opportunity to compete regarding access to our Operational Support Systems
(OSSs), including front-end interfaces and information contained in back-end systems. Every project that may
impact what CLECs and/or Wholesale customers send, receive, see, or do (e.g., new products, services, or
network applications) must undergo an impact analysis. This includes any changes to existing systems,
processes or procedures.
2.12.1 Does this project impact the CLECs?
N/A
2.13. Training / End User Education / Staffing
2.13.1 List and describe the different types of internal and end user education training that is required and the
anticipated training tools / environments (e.g. OPUS Training Environment) and work groups / organizations
impacted, including groups that will be responsible for maintaining the product / service after life cycle
management transition. Internal training addresses training for AT&T employees supporting the Project
Team and their respective testing activities while end user education addresses training for the customer and
internal AT&T employees supporting operations for the customer.
Training for Operations Tier 2 and Tier 3 Teams

2.14. Back Up & Disaster Recovery


2.14.1 This section is applicable to all new AT&T Network technology hardware additions. All AT&T Network
technology hardware additions are required to be designed, implemented, and maintained to ensure
continuity of critical network and customer functions in the event of a disaster. List the Technology Recovery
Solution (TRS) recommendation provided by the Disaster Recovery Solution Assessment Tool (DRSAT). NOTE:
This project will not progress past DG1 without a Disaster Recovery Solution Assessment Tool (DRSAT)
recommendation or Disaster Recovery Exception (DRE) request. All DRSAT recommendations or DRE requests
will be reviewed by the Network Disaster Recovery (NDR) group. To access the DRSAT or DRE, click here.
TBD

2.15. Compute and Store


2.15.1 Is there new Compute and Storage infrastructure that will be deployed as part of this project (i.e., bare-metal servers, SAN/NAS storage, AIC, VMs, etc.)?

1. Initial deployment will be on the Ericsson CNIS platform (EIAP will be on AON platform via separate PMATT).
2. # of Locations – TBD
• Are the servers in Data Centers, NTC, CO, SNRC, VHO? NTC
3. # of Servers – TBD
4. # of VMs – TBD
5. # of AIC Servers – N/A
6. Amount of Storage in GB, TB, or PB? Either per widget or in total. TBD
7. Is there a related VPMO project? TBD
8. If this is an expansion project, who has historically supported these servers? N/A

3 MARKET ASSESSMENT / STRATEGY


3.1 Market Research
3.1.1 List and describe any available primary or secondary research that supports the concept of the
product/service being developed. Also describe any additional market research that may be needed to
validate the product being developed. Will marketing target an audience under the age of 18?
N/A
3.2 Competitive Analysis
3.2.1 List and describe what our competitors are doing that is similar; reference the source of competitive analysis.
List and describe alternative products being offered by other providers, e.g., other RBOCs, other ILECs, CLECs,
cable providers, wireless providers, Interexchange Carriers, by type, by company, and by market as they are
known. Include how this product compares to the products that other providers offer. Additionally, describe
how competitors educate their customers on similar products.
N/A

3.3 Target Market


(Required for Initial Scope Document)

3.3.1. Indicate impacted Geographic Regions, Customer Segments/User Types, Sales/Distribution Channels, and
Work Centers by placing an X in the table below. Briefly describe the offering, key strategic and tactical
elements.

N/A

Target Audience (fill this section out to the best of your knowledge; double-click a checkbox to select)

1) What Geographic Regions are in scope for this project? (check all that may apply)
• Europe, Middle East, Africa (EMEA)
• Domestic US
• US Territories
• Canada
• Central America/Latin America (CALA)
• Most Of World (MOW)
• Japan
• Asia/PAC
• Other: _______________

2) What Sales/Distribution Channels are targeted by this project? (check all that may apply)
• Consumer
• Gov’t – Fed.
• Gov’t – State/Local
• Education (for K-12 Schools, including Tribal & Public Libraries; must complete sec. 4.3.5.)
• Healthcare
• Home Solutions
• Prepaid
• Wholesale
• System Integrators (IBM, HP)
• Online (e.g., [Link], [Link])
• Indirect eCommerce (e.g., [Link])
• Business Digital Experience (BDE)
• Direct Sales
• Company Owned Retail (COR) Stores
• Dealers
• Retailers (e.g., Best Buy)
• Resellers
• Partner Telcos
• Telesales
• Alternate Channels
• Alliance
• ACC Business
• Small Business Solutions (BIS, SBS, BAS, VCTG)
• Global Business Sales (SCG, GCG)
• Enterprise Business Sales (CBS, PCG)
• Emerging Business Markets (EBM)
• Other: Care/Save Desk
• N/A

3) What Work Centers will be impacted? (check all that may apply)
• Network Reliability Center (NRC)
• Global Network Operations Center (GNOC)
• Global Customer Support Center
• Call Centers
• Sales Support Center
• AT&T Telco
• Mobility Support Center
• U-Verse Support Center
• Other: ____________________
• N/A

3.4. Sales Compensation / Incentives


3.4.1 Will sales and/or customer service reps be paid commission for selling the product/service?
TBD
3.5 Employee Concessions / Incentives / Communications
3.5.1. List and describe employee concession capability and/or employee referral (Prefer 2 Refer) impacts and
rewards.
N/A
3.5.2. Do you plan to market your product to customers through email, mail, SMS or outbound telemarketing?
Being able to do this requires modifications to Aprimo, Unica and the underlying database IMDM. The IMDM
resides on the eCDW and helps product managers identify the qualified customers for their promotions, offers
and campaigns. This system needs to be kept current with the new products and modifications to existing
products.
N/A

3.6 Intellectual Property Indicator


3.6.1. Identify if any part of this project, if applicable, is expected to be internally created, adapted and / or
innovated by our employees or contractors. Submit an invention disclosure via the AT&T Patent Disclosure
System (PDS) if applicable.
N/A

4 RISK MANAGEMENT / LEGAL


4.1. Revenue Assurance
4.1.1. What applications and/or systems will be used to bill the service/product?
BAU.
4.1.2. If this is an existing product, will this effort change the biller? (e.g. TDM to Enabler, CRIS to Universal Biller)?
N/A
4.1.3. If this is an existing product, will this effort change the way the customer is billed (e.g., display on the bill, advance vs. arrears billing, usage, one-time charges)?
N/A
4.1.4 Will there be third party involvement in this effort / with this product? Explain.
N/A
4.1.5. Is there revenue sharing involved with this effort?
No.
4.1.6. Will customer information be taken at the point of sale? Explain.
N/A
4.1.7. What applications and/or systems will be used to order the service/product?
N/A
4.1.8. Describe what monitoring procedure/system/tool will be used to validate that charges are correctly billed and
continue to bill customers.
N/A
4.1.9. Describe what monitoring procedure/system/tool will be used to validate services and products ordered are
accurately invoiced to customers.
N/A
4.1.10. Describe the testing environment associated with this product, if known (IST, UAT, ETE).
N/A
4.2 Revenue Accounting
4.2.1. List and describe Revenue Accounting impacts or considerations for this project, e.g., new revenue stream,
reconciliation impacts, third party arrangements, journalizing processes, arrangements, barter agreements or
revenue share agreements.
N/A

4.3. Billing / Collections


4.3.1. List and describe Billing impacts for the new/changed product or service. For example, identify if there are
new and / or changes in the customer billing definition as a result of new and / or provisioning changes, new
and / or changed billing usage data, 3rd party billing, changes to billing applications, etc.
N/A (No impacts to Billing and collections)
4.3.2 Will there be billing format changes and/or a new line on the bill?
N/A
4.3.3. Describe any impacts to Credit & Collections. Describe any changes to the Payment Processing procedures or
communication to the customer.
None
4.3.4. Does this project/effort introduce opportunity for combined/converged/unified billing? If so, how?
No.
4.3.5. Does your project, new product or service have impacts to the Customer Service Summary? For more
information, please contact us at DL-CSSTeam@[Link].
No
4.3.6. USF- Identify eligibility and requirements associated with all Federal & State USF programs.
• Federal:
o E-Rate (K-12 Schools, including Tribal and Public Libraries) Must engage E-Rate Product Eligibility
team.
• Rural Health Care (RHC), Lifeline
• State: Oklahoma USF, California Teleconnect Fund

None
4.3.7 Does your change involve changes in Concessions, Surcharges, Taxes, or Fees? If so, how?
No
4.3.8 Does your project involve billing for services provided by any third-party service provider (i.e., any non-AT&T
company)? If yes, the Billing & Collections team, B&C attorney and Compliance must be engaged to evaluate
possible Wireline Consent Decree implications.
No

4.4 Sarbanes-Oxley
4.4.1. List and describe any applicable Sarbanes-Oxley revenue and/or billing expectations regarding impacts to
existing controls or the need for new controls, e.g., new/retiring systems, interfaces, processing, revenue
streams.
TBD

4.5 Surcharge / Fees and Taxing Obligations


4.5.1. List and describe any applicable surcharges/fees, governmental assessments and taxing obligations.
Surcharge/fees and governmental assessments require concurrence and approval of regulatory and finance
SMEs prior to promotion and product launch. Client/Product Owner must complete the Service Surcharge
Evaluation and list the name of Product Manager and the authorized attorney who approved the revenue
structure of the product before the evaluation can be sent for surcharge application assessment.
BAU
4.5.2. Are we at risk of fines or levies if this function(s) is unavailable?
No
4.5.3. List and describe applicable Legal and Regulatory (international, federal, state) impacts for this product in the
following subsections: (Links are embedded to assist you in evaluating impacts.)
• Tariff Considerations
• Regulatory Fees (UCC, USF, etc.)
• Most of World (MOW) / International Regulatory Considerations
• Other (e.g., E911, CALEA, HIPAA)

BAU
4.6 Accessibility for People with Disabilities

4.6.1 International, US federal, and state laws require AT&T to consider accessibility as early as possible in the
project lifecycle and then implement accessibility and/or compatibility features for certain new and
redesigned products, services, software, applications, networks, and websites. All projects shall maintain
compliance with the AT&T Corporate Accessibility Policy and all applicable regulations. To determine
whether these accessibility laws apply to specific projects, AT&T has established the Corporate Accessibility
Technology Office (CATO) to partner with business units. Engagement with CATO occurs after meeting the
following criteria:
• Scope defined
• In Plan Status (When in Plan Required)
• CPMO Approval
• Funding Source Known
Will submit for CATO and follow CATO guidelines.
4.7. PSAP (Product Services Accounting Process) review
4.7.1. If this initiative impacts revenue, taxes, surcharges, and / or product revenue reporting, a PSAP (Product
Services Accounting Process) review is required.
To initiate a PSAP review, directly access the ePSAP tool at the link provided below if you have current access permissions:
ePSAP Tool
If you require ePSAP access, please click on the document link below for direction:
ePSAP Request Access
The PSAP review will provide guidance on correct and compliant:


• Revenue general ledger accounts (GLA) and applicable revenue codes
• Required Tax treatments and codes
• GAAP, corporate accounting and revenue recognition requirements
• Surcharge assessment requirements
• Product Revenue Reporting classifications and mappings
• Public Policy, Regulatory Accounting, Billing Operations, Accounting Classifications and Legal partnering
as needed.

4.8. Fraud
4.8.1. List and describe any applicable fraud prevention processes, reporting, or usage feeds that are available or will be implemented to detect, monitor, investigate, and intervene in confirmed instances of abuse across networks, products and services, systems, or processes (e.g., impact to usage/session/data feeds for fraud monitoring applications such as GFMS).
N/A
4.8.2 List and describe Fraud impacts or considerations for this project, e.g., ability to game system, inadequate
eligibility controls, weak activation controls at POS, new revenue stream, reconciliation impacts, third party
arrangements, journalizing processes.
N/A
4.9. Privacy
4.9.1. Will the project/effort collect, access, or in any way impact any of the following? Private, personal information
as predefined in ASPR-0206: AT&T Proprietary (Sensitive Personal Information) Data Elements which, if
compromised or exposed, could present a risk to individuals and would legally require AT&T to disclose the
exposure.
No
4.9.2. Access a list of Sensitive Personal Information (SPI) and list any impact to the stated data elements.
No
4.9.3. To use the product/service associated with this project/effort will the customer/user need to
add/change/delete any information such as email address, postal address, password/passcode, security
question/answer, authorized account users, etc.?
No
4.9.4. Does this project/effort require controls or preferences to be offered to the customer? (Example: Opt Out or Opt
In)?
No
4.9.5. If a new customer account is being created, an existing account is updated, or customer information is accessed, describe how privacy considerations will be handled, such as the collection of and ability to view personal information (SPI, CPNI, PII), triggering and recording mandated account changes that protect customer account privacy, and the proposed use of the information, such as sharing or distribution. If the product or service is targeted to youth, ensure the COPPA checklist is completed.
N/A

4.9.6 Does this project/effort include targeting the youth segment (under 18)?
No
4.10. Security
(Required for Initial Scope Document)

List and describe any applicable security standard expectations. AT&T corporate security information is accessible
via the AT&T Chief Security Office (CSO).
Security requirements map directly to existing AT&T Chief Security Office (CSO) published AT&T Security Policy
and Requirements (ASPR) documentation. The ASPR publication is composed of nine (9) major security domains that
are further divided into sub-domains which organize security requirements into logical groupings.
Please refer to the AT&T Overview document that provides descriptions which serve to introduce the ASPR
security domains.
• The solution must adopt and implement a Zero Trust architecture that is based on published Zero Trust
principles and guidance. (See: NIST Special Publication 800-207)
• The solution must meet all relevant ASPR, NIST, J-10, O-RAN, NTIA, ETSI, and 3GPP security requirements.
• The solution must meet the General Security Aspects of Zero-touch Operations and Zero-touch Network
and Service Management. (See: ETSI GR ZSM 010 V1.1.1)
• The solution must meet requirements for agentless operations and must allow for CSO tooling (agents) to be installed on the EIAP platform or EIAP components (as applicable) to meet existing and evolving security reporting standards.
• The solution must integrate with existing AT&T Identity and Access Management (IAM) systems.
• The solution must integrate with existing AT&T Security Information and Event Management (SIEM)
systems.
• The solution must integrate with existing AT&T internal image repositories for deployable artifacts (e.g.,
Operating System Gold Images, Virtual Machine Gold Images, Container Gold Images, firmware images,
firmware updates etc.).
• The solution must integrate with existing AT&T change control policies and tooling.
• The solution must integrate with existing AT&T internal CI/CD tooling.
• The solution must be able to manage the securing of distributed (e.g., remote location) hardware.
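As an illustration of the internal-image-repository requirement above, a deployment pipeline could reject any container image that does not come from an approved registry. This is a minimal sketch only; the registry hostnames and the allowlist approach are illustrative assumptions, not AT&T or Ericsson specifics:

```python
# Sketch: verify that container image references (e.g., for vCU/vDU CNFs)
# resolve to an approved internal registry before deployment.
# The registry names below are placeholders, not real AT&T endpoints.

APPROVED_REGISTRIES = {
    "registry.internal.example.com",     # hypothetical internal image repo
    "gold-images.internal.example.com",  # hypothetical gold-image repo
}

def registry_of(image_ref: str) -> str:
    """Return the registry host of an OCI image reference.

    A reference carries an explicit registry only when its first path
    segment contains a '.' or ':' (e.g. 'quay.io/app:v1'); otherwise it
    is an implicit Docker Hub reference.
    """
    if "/" not in image_ref:
        return "docker.io"              # e.g. 'ubuntu:22.04'
    first = image_ref.split("/", 1)[0]
    if "." in first or ":" in first:
        return first.split(":")[0]      # strip any port number
    return "docker.io"

def is_approved(image_ref: str) -> bool:
    return registry_of(image_ref) in APPROVED_REGISTRIES

# A CNF image from the internal registry passes; a public image does not.
print(is_approved("registry.internal.example.com/ran/vdu:1.2.3"))  # True
print(is_approved("docker.io/library/nginx:latest"))               # False
```

In practice the same provenance check would be enforced by the platform itself (for example, as an admission policy in the container orchestrator); the sketch only shows the allowlist logic.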

Security Considerations (Risk Driver | Check all that Apply | Description, if Other)

2.15.1. Identify the application type

Application Type:
• Web Interface/Portal: No
• Client for the Mobile and/or Internet of Things (IoT): No
• API: No
• Other: Yes - New vDU, vCU containerized RAN network functions (CNFs)

2.15.7. Identify Access and the User Identity

User roles:
• Admin: No - User roles will be managed by AT&T IAM systems.
• CSR (Certificate Signing Request): Yes - Certificates will be applied
• Other:

Access:
• Federated: No
• AT&T Managed: Yes - AT&T will be able to access/manage
• Vendor Managed: No
• Other: No

2.15.8. Identify Third Party Involvement

Will the vendor or 3rd party:
• Provide product/service: Yes - Vendor must adhere to the Security Requirements provided above.
• Query AT&T systems: No
• Host the service/product: No

Will the vendor or 3rd party interact with Customer Information:
• Store Customer Information: No
• Share Customer Information: No
• Sell Customer Information: No

2.15.9. Identify the Infrastructure
• Cloud: Yes - Ericsson Cloud RAN
• Client/Server: No

4.11. Tax
4.11.1 List and describe Tax impacts or considerations for this project, e.g., third party involvement, new revenue
streams, reconciliation/reporting issues, tax exemption issues, or tax compliance impacts.
N/A

4.12. Net Neutrality


The FCC’s 2015 Net Neutrality Order reclassified broadband Internet access services as Title II telecommunications
services and imposed new net neutrality rules. The rules apply equally to mass market (not enterprise) mobile and fixed
broadband service. “Mass market” is defined as a service sold on standard terms to residential customers, small
businesses, and services purchased with E-rate and Rural Healthcare support, as well as services offered using USF
support.
4.12.1 Does this project involve mass market broadband Internet access services (mobile and/or fixed broadband)?
No
4.12.2 If this project involves mass market broadband Internet access services (mobile and/or fixed broadband),
answer the following question:
• Does this project violate any of the “Bright Line Rules” or “No Unreasonable Interference or Unreasonable
Disadvantage to Consumers or Edge Providers” net neutrality rules as outlined in FCC Net Neutrality 101?
If the answer is yes to any one of these rules or if unsure, contact your Business Unit Attorney.
No known violations.


5 METRICS / TABLES
Forecasted Volume

Note:
• If the KPI does not apply to the project, mark as N/A.
• Many projects have unique KPI’s.
• Targets are set and tracked by the Client/PO.
N/A

Table 2 - Customer and Sales Experience Key Performance Indicators (KPIs)
(Columns: Description; Track? (Y/N); 3 months; 6 months; Year 1; Year 2; Year 3; Year 4; Comments. Only Track? values are filled in.)

• Rate of Penetration: Y
• Number of Subscribers:
• Percent Web Sales: Y
• Percent Web Based Self-Service:
• Sales Interval (e.g., Average Handle Time/Wait Time):
• Revenue Intensity:
• Per sub Cost of Service:
• Per sub Average Revenue Per Unit (ARPU):
• Return on Operations:
• Gross Margin:
• Operational Expense:
• Revenue Generated from this product:
• Reputation:
• Churn and/or Cancel Rate: Y
• Acceptable Monthly Downtime (Service / Network Reliability):
• Percentage of Outages:
• Number of Calls into Customer Service (Call Rate): Y
• Ticket to Deal Ratio (TTDR) for Sales-facing functions/tools:
• Billing Accuracy:
• Due Date / Delivery Interval:
• Install/Set Up/Activation Interval: Y
• Repair Duration (Out of Service (OOS) / Service Affecting):
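For reference, two of the financial KPIs listed above follow standard formulas. The sketch below uses illustrative numbers; the function names and inputs are ours, not the API of any AT&T tracking system:

```python
def arpu(total_revenue: float, subscribers: int) -> float:
    """Average Revenue Per Unit: period revenue divided by subscriber count."""
    return total_revenue / subscribers if subscribers else 0.0

def churn_rate(cancels: int, subscribers_at_start: int) -> float:
    """Churn: fraction of period-start subscribers who cancelled."""
    return cancels / subscribers_at_start if subscribers_at_start else 0.0

# Illustrative period: $50,000 revenue over 1,000 subscribers, 25 cancels.
print(arpu(50_000, 1_000))    # 50.0
print(churn_rate(25, 1_000))  # 0.025
```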

6 DEFINITION OF TERMS/ABBREVIATIONS/ACRONYMS
(REQUIRED FOR INITIAL SCOPE DOCUMENT)

ACRONYMS

CaaS Container as a Service

CCD Cloud Container Distribution

CNIS Cloud Native Infrastructure System

CO Central Office where V-RAN equipment will be housed

COTS Commercial Off The Shelf Hardware

CR Cloud Radio Access Network

CU Central Unit Radio Access Network

DU Distributed Unit Radio Access Network

EIAP Ericsson Intelligent Automation Platform (Ericsson’s SMO)

ENM Ericsson Network Manager

FCAPS Fault, Configuration, Accounting, Performance and Security

FDD Frequency-Division Duplexing

gNB 5G or Next Generation Base station

GW Gateway nodes are part of the core network, handling user plane and control plane

HW Hardware

IP Internet Protocol

MTSO Mobile Telephone Switching Office

NR New Radio

NSA 5G non-Standalone

O&M Operation and Maintenance

OMC Operations Manager Cloud Infrastructure

PIZ AT&T’s Product Innovation Zone

POD Performance Optimized Datacenter

RAN Radio Access Network

RAT Radio Access Technology

RRU Remote Radio Unit

RU Radio Unit

SA 5G Stand Alone

SDI Software Defined Infrastructure

SIM Subscriber Identity Module

SMO Service Management and Orchestration

STA Special Temporary Authority for use of spectrum

SW Software

TDD Time-Division Duplexing

TOL Test Object List: a document listing the applicable test cases for the proposed solution

UE User Equipment for 5G, consisting of RRH, MTP, and gateway with home-facing interfaces

vCU virtualized Centralized Unit Radio Access Network

vDU virtualized Distributed Unit Radio Access Network

K8s Kubernetes


7 SCOPE DOCUMENT APPROVAL

Baseline Scope Document


PROJECT TEAM APPROVAL

By their signature the participants identified below agree that this document is acceptable and complete to the
best of their knowledge and will be used by the Project Team as an official deliverable for the project. E-mail
responses should be saved and embedded in the signature column of this table or be made available in another

format, if requested.

Participant | Role / Department / Group | Signature | Date Received | Approval Comments


8 DOCUMENT VERSION CONTROL


Successive iterations of the pre-baseline PSD are numbered 0.2, 0.3, etc. The versioning Table in the PSD template should
be maintained with each version checked into P8.

Version # | Version Date | Description of Changes | Impacted Sections | Author
0.1 | 01/09/24 | Initial PSD | All | Daniela Diefenbach
